Compare commits


46 Commits

Author SHA1 Message Date
Grzegorz Michalski
a35e28042b feat(FILE_ARCHIVER): Improve archival logic and error handling in FILE_ARCHIVER procedures 2026-03-23 11:48:37 +01:00
Grzegorz Michalski
92feb95ae0 feat(FILE_ARCHIVER): Enhance documentation with new function details and clarify private functions 2026-03-20 13:37:05 +01:00
Grzegorz Michalski
74b8857096 feat(FILE_ARCHIVER): Rename IS_KEEP_IN_TRASH to IS_KEPT_IN_TRASH for consistency in configuration 2026-03-20 13:26:37 +01:00
Grzegorz Michalski
24997b1583 feat(MARS-1409): Add prerequisite checks for MARS-1409 objects in installation script 2026-03-20 13:13:26 +01:00
Grzegorz Michalski
eb9b2bc38b feat(FILE_MANAGER): Rename pIsKeepInTrash to pIsKeptInTrash for consistency in parameter naming 2026-03-19 13:29:51 +01:00
Grzegorz Michalski
2ea708a694 Merge branch 'main' of https://git.itbi.mywire.org/admin/mars 2026-03-19 12:26:27 +01:00
Grzegorz Michalski
12c58f32a3 feat(MARS-1409): Rename IS_KEEP_IN_TRASH to IS_KEPT_IN_TRASH across relevant files and update related logic 2026-03-19 12:23:29 +01:00
Grzegorz Michalski
811df6e8b1 feat(FILE_ARCHIVER): Enhance logging messages to include detailed parameters for better error tracking 2026-03-19 12:14:35 +01:00
Grzegorz Michalski
c2e9409e55 feat(FILE_ARCHIVER): Enhance logging by adding parameters to log events for better traceability 2026-03-19 11:51:37 +01:00
Grzegorz Michalski
c96bf2051f Merge branch 'main' of https://git.itbi.mywire.org/admin/mars 2026-03-19 11:13:37 +01:00
Grzegorz Michalski
5d0e03d7ad feat(MARS-1409): Add DATA_EXPORTER package installation and rollback scripts 2026-03-19 11:13:09 +01:00
Grzegorz Michalski
ffd6c7eeae feat(ENV_MANAGER): Add new error codes for workflow key validation and update package version to 3.3.0
refactor(FILE_MANAGER): Remove redundant error logging for unknown errors
2026-03-19 11:13:02 +01:00
Grzegorz Michalski
bbdf008125 Add DATA_EXPORTER package for comprehensive data export capabilities
- Introduced CT_MRDS.DATA_EXPORTER package to facilitate data exports in CSV and Parquet formats.
- Implemented support for cloud storage integration via Oracle Cloud Infrastructure (OCI).
- Added versioning and detailed version history for tracking changes and improvements.
- Included main export procedures: EXPORT_TABLE_DATA, EXPORT_TABLE_DATA_BY_DATE, and EXPORT_TABLE_DATA_TO_CSV_BY_DATE.
- Enhanced parallel processing capabilities for improved performance during data exports.
2026-03-19 10:50:28 +01:00
Grzegorz Michalski
396e7416f6 feat(FILE_ARCHIVER): Update SQL query in ARCHIVE_TABLE_DATA for improved archival statistics and column order consistency 2026-03-19 09:37:42 +01:00
Grzegorz Michalski
0ed75875ac Refactor MARS-1409: Rollback changes to A_SOURCE_FILE_RECEIVED and related tables
- Dropped A_WORKFLOW_HISTORY_KEY column from A_SOURCE_FILE_RECEIVED with data preservation.
- Removed unnecessary checks for column existence during rollback.
- Updated A_SOURCE_FILE_CONFIG, A_TABLE_STAT, and A_TABLE_STAT_HIST to their pre-MARS-1409 structures, excluding new columns added in MARS-1409.
- Adjusted FILE_ARCHIVER package to reflect changes in statistics handling and archival triggers.
- Revised rollback script to ensure proper order of operations for restoring previous versions of packages and tables.
2026-03-19 08:46:49 +01:00
Grzegorz Michalski
a7db9b67bc Merge branch 'main' of https://git.itbi.mywire.org/admin/mars 2026-03-18 18:19:19 +01:00
Grzegorz Michalski
ce9b6eeff6 feat(FILE_MANAGER): Update package version to 3.6.3 and enhance ADD_SOURCE_FILE_CONFIG with new parameters for archival control
- Bump package version to 3.6.3 and update build date.
- Add new parameters: pIsArchiveEnabled, pIsKeepInTrash, pArchivalStrategy, pMinimumAgeMonths to ADD_SOURCE_FILE_CONFIG.
- Include pIsWorkflowSuccessRequired parameter to control workflow success requirement for archival.
- Update version history to reflect changes.

feat(A_SOURCE_FILE_CONFIG): Modify table structure to include new archival control flags

- Add IS_WORKFLOW_SUCCESS_REQUIRED column to A_SOURCE_FILE_CONFIG for workflow bypass functionality.
- Update constraints and comments for new columns.
- Ensure backward compatibility with default values.

fix(A_TABLE_STAT, A_TABLE_STAT_HIST): Extend table structures to accommodate new workflow success tracking

- Add IS_WORKFLOW_SUCCESS_REQUIRED column to both A_TABLE_STAT and A_TABLE_STAT_HIST.
- Update comments to clarify the purpose of new columns.

docs(FILE_ARCHIVER_Guide): Revise documentation to reflect new archival features and configurations

- Document new IS_WORKFLOW_SUCCESS_REQUIRED flag and its implications for archival processes.
- Update examples and configurations to align with recent changes in the database schema.
- Ensure clarity on archival strategies and their configurations.
2026-03-18 18:19:04 +01:00
Grzegorz Michalski
0725119b45 feat: Enhance MARS-1409 post-hook scripts to include checks for empty ODS tables and update installation script for workflow key diagnosis 2026-03-17 12:10:18 +01:00
Grzegorz Michalski
896e67bcb9 feat: Refactor A_SOURCE_FILE_CONFIG table structure and update comments for clarity 2026-03-17 10:58:01 +01:00
Grzegorz Michalski
ad5a6f393a feat: Update installation script to reflect expected duration for MARS-1409 post-hook process 2026-03-17 09:54:42 +01:00
Grzegorz Michalski
a4ac132b76 feat: Implement MARS-1409 changes to add ARCHIVAL_STRATEGY and ARCH_MINIMUM_AGE_MONTHS columns to A_TABLE_STAT and A_TABLE_STAT_HIST, and update FILE_ARCHIVER for handling these new fields 2026-03-17 08:23:14 +01:00
Grzegorz Michalski
6468d12349 minor 2026-03-13 13:51:53 +01:00
Grzegorz Michalski
fe0f7bce18 feat: Enhance FILE_ARCHIVER package to handle empty ODS bucket scenarios with improved statistics initialization 2026-03-13 13:34:38 +01:00
Grzegorz Michalski
6b2f60f413 feat: Update FILE_ARCHIVER package to version 3.3.1 with improved handling for empty ODS bucket scenarios 2026-03-13 11:40:19 +01:00
Grzegorz Michalski
ca11debd93 minor 2026-03-13 11:35:11 +01:00
Grzegorz Michalski
24e6bce18c minor changes 2026-03-13 11:34:59 +01:00
Grzegorz Michalski
aa03dd1616 feat: Update FILE_MANAGER package to version 3.6.1 with fixes for CHAR/NCHAR/NVARCHAR2 column definitions 2026-03-13 09:11:28 +01:00
Grzegorz Michalski
9190681051 MARS-1409-POSTHOOK 2026-03-13 09:08:44 +01:00
Grzegorz Michalski
096994d514 feat: Add diagnostic script for workflow key status in MARS-1409 post-hook 2026-03-13 08:43:14 +01:00
Grzegorz Michalski
1385bfb9e7 feat: Implement MARS-1409 post-hook for backfilling A_WORKFLOW_HISTORY_KEY
- Added .gitignore to exclude temporary folders.
- Created SQL script to update existing A_WORKFLOW_HISTORY_KEY in A_SOURCE_FILE_RECEIVED.
- Implemented rollback script to clear backfilled A_WORKFLOW_HISTORY_KEY values.
- Added README.md for installation and usage instructions.
- Developed master installation and rollback scripts for MARS-1409 post-hook.
- Verified installation and rollback processes with detailed checks.
- Updated trigger logic to manage workflow history updates.
- Ensured proper version tracking and verification for related packages.
2026-03-13 08:30:32 +01:00
Grzegorz Michalski
7d2fb34ad9 MARS-1005-PREHOOK 2026-03-12 08:51:15 +01:00
Grzegorz Michalski
202b535f9f Update DATA_EXPORTER package to v2.17.0: Fix RFC 4180 compliance and Parquet format corruption 2026-03-12 08:50:08 +01:00
Grzegorz Michalski
5ba6c30fda MARS-1005-PREHOOK 2026-03-11 10:34:47 +01:00
Grzegorz Michalski
64a4b9a2f0 Refactor rollback script to delete specific legacy files and adjust object URI construction 2026-03-09 11:46:01 +01:00
Grzegorz Michalski
dec3e7137e Refactor rollback script to delete only files registered by MARS-1005 and improve output messages 2026-03-09 10:24:24 +01:00
Grzegorz Michalski
0ecc119ee9 Refactor data integrity verification script to use A_ETL_LOAD_SET_FK instead of A_WORKFLOW_HISTORY_KEY 2026-03-09 09:31:30 +01:00
Grzegorz Michalski
182e6240d3 Update export script comments for clarity and consistency 2026-03-09 09:25:20 +01:00
Grzegorz Michalski
b81e524351 Refactor MARS-1005 scripts for OU_TOP legacy data export and rollback
- Updated SQL scripts to verify data integrity for 6 OU_TOP.LEGACY_* tables instead of 3 C2D MPEC tables.
- Modified rollback script to delete exported CSV files from ODS/TOP/ bucket paths.
- Enhanced verification script to check for remaining files and cloud bucket contents specific to MARS-1005.
- Adjusted install script to reflect changes in target tables and their corresponding paths in the ODS bucket.
- Updated README to include instructions for the new MARS-1005 installation and rollback processes.
2026-03-06 14:34:12 +01:00
Grzegorz Michalski
73e99b6e76 MARS-1005 2026-03-06 12:06:18 +01:00
Grzegorz Michalski
113ea0a618 Refactor MARS-1409 SQL scripts for workflow history key management
- Added checks for existing columns before adding or dropping A_WORKFLOW_HISTORY_KEY in relevant scripts to prevent errors.
- Updated rollback scripts to ensure proper restoration of previous states, including recompilation of dependent packages.
- Introduced a diagnostic script to assess the status of workflow keys against ODS tables, providing detailed reporting on discrepancies.
- Adjusted trigger definitions to accommodate new workflow names and ensure correct handling of workflow history.
- Modified master rollback script to streamline the rollback process and improve clarity in step descriptions.
2026-03-05 12:33:59 +01:00
Grzegorz Michalski
59e18d9b35 Add error handling for TRG_A_WORKFLOW_HISTORY trigger installation 2026-03-04 10:16:35 +01:00
Grzegorz Michalski
a58a5ae82a ignore export files 2026-03-03 09:48:43 +01:00
Grzegorz Michalski
b537719b64 added template tables 2026-03-03 09:47:24 +01:00
Grzegorz Michalski
4de14b64fb rmemove unneeded 2026-03-03 09:46:06 +01:00
Grzegorz Michalski
36a04dde04 MARS-1409 2026-03-02 14:26:12 +01:00
Grzegorz Michalski
cad6e63479 exported files from dev 2026-03-02 13:51:59 +01:00
281 changed files with 18024 additions and 817 deletions

.gitignore

@@ -19,6 +19,8 @@ issues/
ehthumbs.db
Thumbs.db
MARS_Packages/mrds_elt-dev-database/mrds_elt-dev-database/database/CT_MRDS/export/*
MARS_Packages/REL01/MARS-1056/confluence/
MARS_Packages/REL01/MARS-1056/log/
MARS_Packages/REL01/MARS-1046/confluence/


@@ -0,0 +1,5 @@
# Exclude temporary folders from version control
confluence/
log/
test/
mock_data/


@@ -0,0 +1,249 @@
-- ============================================================================
-- MARS-1409-POSTHOOK Step 01: Update A_WORKFLOW_HISTORY_KEY for existing records
-- ============================================================================
-- Purpose: Populate A_WORKFLOW_HISTORY_KEY for existing A_SOURCE_FILE_RECEIVED records
-- by extracting values from corresponding ODS tables
-- Author: Grzegorz Michalski
-- Date: 2026-03-13
-- Prerequisites:
-- - MARS-1409 installed (A_WORKFLOW_HISTORY_KEY column exists in A_SOURCE_FILE_RECEIVED)
-- - ODS tables contain A_WORKFLOW_HISTORY_KEY and file$name columns
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
WHENEVER SQLERROR EXIT SQL.SQLCODE
PROMPT
PROMPT Updating A_WORKFLOW_HISTORY_KEY for existing A_SOURCE_FILE_RECEIVED records...
DECLARE
vUpdatedTotal NUMBER := 0;
vUpdatedCurrent NUMBER := 0;
vFailedConfigs NUMBER := 0;
vTableNotFound NUMBER := 0;
vSkippedConfigs NUMBER := 0;
vEmptyTables NUMBER := 0;
vHasData NUMBER := 0;
vTableName VARCHAR2(200);
vSQL VARCHAR2(32767);
vRecordsToUpdate NUMBER := 0;
vRemainingTargeted NUMBER := 0;
vTableExists NUMBER := 0;
BEGIN
-- Count total records to update
SELECT COUNT(*) INTO vRecordsToUpdate
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE A_WORKFLOW_HISTORY_KEY IS NULL
AND PROCESSING_STATUS IN ('VALIDATED', 'READY_FOR_INGESTION', 'INGESTED', 'ARCHIVED', 'ARCHIVED_AND_TRASHED', 'ARCHIVED_AND_PURGED');
DBMS_OUTPUT.PUT_LINE('Found ' || vRecordsToUpdate || ' records with NULL A_WORKFLOW_HISTORY_KEY');
DBMS_OUTPUT.PUT_LINE('----------------------------------------');
-- Process each INPUT configuration that has records to update
FOR config_rec IN (
SELECT
sfc.A_SOURCE_FILE_CONFIG_KEY,
sfc.A_SOURCE_KEY,
sfc.SOURCE_FILE_ID,
sfc.TABLE_ID,
sfc.TEMPLATE_TABLE_NAME,
(SELECT COUNT(*)
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr
WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = sfc.A_SOURCE_FILE_CONFIG_KEY
AND sfr.A_WORKFLOW_HISTORY_KEY IS NULL
AND sfr.PROCESSING_STATUS IN ('VALIDATED', 'READY_FOR_INGESTION', 'INGESTED', 'ARCHIVED', 'ARCHIVED_AND_TRASHED', 'ARCHIVED_AND_PURGED')
) AS NULL_COUNT
FROM CT_MRDS.A_SOURCE_FILE_CONFIG sfc
WHERE sfc.SOURCE_FILE_TYPE = 'INPUT'
AND sfc.TABLE_ID IS NOT NULL
ORDER BY sfc.A_SOURCE_KEY, sfc.SOURCE_FILE_ID, sfc.TABLE_ID
) LOOP
IF config_rec.NULL_COUNT = 0 THEN
vSkippedConfigs := vSkippedConfigs + 1;
DBMS_OUTPUT.PUT_LINE('SKIP: Config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (' || config_rec.A_SOURCE_KEY || '/' || config_rec.SOURCE_FILE_ID || '/' || config_rec.TABLE_ID ||
') - no records to update');
CONTINUE;
END IF;
BEGIN
-- Construct ODS table name from TABLE_ID (ODS tables have _ODS suffix)
vTableName := 'ODS.' || config_rec.TABLE_ID || '_ODS';
-- Check table existence before attempting dynamic SQL
SELECT COUNT(*) INTO vTableExists
FROM ALL_TABLES
WHERE OWNER = 'ODS'
AND TABLE_NAME = config_rec.TABLE_ID || '_ODS';
IF vTableExists = 0 THEN
vTableNotFound := vTableNotFound + 1;
DBMS_OUTPUT.PUT_LINE('SKIP: Config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (' || config_rec.A_SOURCE_KEY || '/' || config_rec.SOURCE_FILE_ID || '/' || config_rec.TABLE_ID ||
') - ODS table not found: ' || vTableName);
CONTINUE;
END IF;
-- Pre-check: verify ODS table has accessible data (empty external table throws ORA-29913/KUP-05002)
vHasData := 0;
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM (SELECT 1 FROM ' || vTableName || ' t WHERE ROWNUM = 1)'
INTO vHasData;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE IN (-29913, -29400) OR INSTR(SQLERRM, 'KUP-05002') > 0 THEN -- same empty-table errors handled by IS_EXTERNAL_TABLE_EMPTY_ERROR in the diagnostic script
NULL; -- vHasData stays 0
ELSE
RAISE;
END IF;
END;
IF vHasData = 0 THEN
vEmptyTables := vEmptyTables + 1;
DBMS_OUTPUT.PUT_LINE('SKIP: Config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (' || config_rec.A_SOURCE_KEY || '/' || config_rec.SOURCE_FILE_ID || '/' || config_rec.TABLE_ID ||
') - ODS table has no files at storage location (empty): ' || vTableName);
CONTINUE;
END IF;
DBMS_OUTPUT.PUT_LINE('Processing config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (' || config_rec.A_SOURCE_KEY || '/' || config_rec.SOURCE_FILE_ID || '/' || config_rec.TABLE_ID || ')...');
-- Update using ODS table
-- NO_PARALLEL hint required: ODS external tables (OCI Object Storage) fail with ORA-12801 under parallel query
vSQL :=
'UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'SET A_WORKFLOW_HISTORY_KEY = ( ' ||
' SELECT /*+ NO_PARALLEL(t) */ t.A_WORKFLOW_HISTORY_KEY ' ||
' FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME ' ||
' AND rownum=1 ' ||
') ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :config_key ' ||
' AND sfr.A_WORKFLOW_HISTORY_KEY IS NULL ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'', ''READY_FOR_INGESTION'', ''INGESTED'', ''ARCHIVED'', ''ARCHIVED_AND_TRASHED'', ''ARCHIVED_AND_PURGED'') ' ||
' AND EXISTS ( ' ||
' SELECT /*+ NO_PARALLEL(t) */ 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME ' ||
' AND rownum=1 ' ||
' )';
EXECUTE IMMEDIATE vSQL USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
vUpdatedCurrent := SQL%ROWCOUNT; -- capture before COMMIT, which resets SQL%ROWCOUNT
COMMIT;
vUpdatedTotal := vUpdatedTotal + vUpdatedCurrent;
IF vUpdatedCurrent > 0 THEN
DBMS_OUTPUT.PUT_LINE(' SUCCESS: Updated ' || vUpdatedCurrent || ' record(s)');
ELSE
DBMS_OUTPUT.PUT_LINE(' INFO: No matching records found in ODS table (files may not be ingested yet)');
END IF;
EXCEPTION
WHEN OTHERS THEN
vFailedConfigs := vFailedConfigs + 1;
DBMS_OUTPUT.PUT_LINE(' ERROR: Unexpected failure for config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (table: ' || vTableName || ')');
DBMS_OUTPUT.PUT_LINE(' Reason: ' || SQLERRM);
-- Continue processing other configurations despite this failure
END;
END LOOP;
COMMIT;
DBMS_OUTPUT.PUT_LINE('----------------------------------------');
DBMS_OUTPUT.PUT_LINE('Update Summary:');
DBMS_OUTPUT.PUT_LINE(' Total records updated: ' || vUpdatedTotal);
DBMS_OUTPUT.PUT_LINE(' Configurations skipped (no NULL records): ' || vSkippedConfigs);
DBMS_OUTPUT.PUT_LINE(' Configurations skipped (ODS table not found): ' || vTableNotFound);
DBMS_OUTPUT.PUT_LINE(' Configurations skipped (ODS table empty - no files at location): ' || vEmptyTables);
DBMS_OUTPUT.PUT_LINE(' Configurations failed (unexpected errors): ' || vFailedConfigs);
-- Check remaining NULL records - targeted statuses only
SELECT COUNT(*) INTO vRemainingTargeted
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE A_WORKFLOW_HISTORY_KEY IS NULL
AND PROCESSING_STATUS IN ('VALIDATED', 'READY_FOR_INGESTION', 'INGESTED', 'ARCHIVED', 'ARCHIVED_AND_TRASHED', 'ARCHIVED_AND_PURGED');
-- Check all remaining NULL records (includes RECEIVED, VALIDATION_FAILED)
SELECT COUNT(*) INTO vRecordsToUpdate
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE A_WORKFLOW_HISTORY_KEY IS NULL;
DBMS_OUTPUT.PUT_LINE(' Remaining NULL records (targeted statuses): ' || vRemainingTargeted);
DBMS_OUTPUT.PUT_LINE(' Remaining NULL records (all statuses): ' || vRecordsToUpdate);
IF vRemainingTargeted > 0 THEN
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('NOTE: Some records with targeted statuses still have NULL A_WORKFLOW_HISTORY_KEY.');
DBMS_OUTPUT.PUT_LINE('    This is expected for files not yet ingested into ODS tables,');
DBMS_OUTPUT.PUT_LINE('    or for ODS tables with a different structure.');
DBMS_OUTPUT.PUT_LINE(' These records will be populated when files are re-processed.');
END IF;
IF vFailedConfigs > 0 THEN
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('NOTE: ' || vFailedConfigs || ' configuration(s) failed with unexpected errors.');
DBMS_OUTPUT.PUT_LINE(' Review the ERROR lines above and investigate manually.');
END IF;
DBMS_OUTPUT.PUT_LINE('----------------------------------------');
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
DBMS_OUTPUT.PUT_LINE('FATAL ERROR: ' || SQLERRM);
DBMS_OUTPUT.PUT_LINE('Transaction rolled back');
RAISE;
END;
/
PROMPT
PROMPT Existing workflow keys update completed!
PROMPT
-- ============================================================================
-- Step 2: Set PROCESSING_STATUS = 'INGESTED' for records whose workflow
-- completed successfully (mirrors trigger A_WORKFLOW_HISTORY logic)
-- ============================================================================
PROMPT
PROMPT Updating PROCESSING_STATUS to INGESTED for completed workflows...
DECLARE
vUpdatedIngested NUMBER := 0;
BEGIN
UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED sfr
SET sfr.PROCESSING_STATUS = 'INGESTED',
sfr.PROCESS_NAME = (
SELECT wh.service_name
FROM CT_MRDS.A_WORKFLOW_HISTORY wh
WHERE wh.a_workflow_history_key = sfr.a_workflow_history_key
)
WHERE sfr.A_WORKFLOW_HISTORY_KEY IS NOT NULL
AND sfr.PROCESSING_STATUS IN ('READY_FOR_INGESTION')
AND EXISTS (
SELECT 1
FROM CT_MRDS.A_WORKFLOW_HISTORY wh
WHERE wh.a_workflow_history_key = sfr.a_workflow_history_key
AND wh.workflow_successful = 'Y'
);
vUpdatedIngested := SQL%ROWCOUNT;
COMMIT;
DBMS_OUTPUT.PUT_LINE('Updated PROCESSING_STATUS to INGESTED: ' || vUpdatedIngested || ' record(s)');
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
DBMS_OUTPUT.PUT_LINE('FATAL ERROR: ' || SQLERRM);
DBMS_OUTPUT.PUT_LINE('Transaction rolled back');
RAISE;
END;
/
PROMPT
PROMPT INGESTED status update completed!
PROMPT


@@ -0,0 +1,373 @@
-- ============================================================================
-- MARS-1409 Diagnostic: Workflow key status after step 09
-- ============================================================================
-- Purpose: For each INPUT config with an ODS table, report:
-- [A] Files present in ODS bucket but NOT registered in A_SOURCE_FILE_RECEIVED
-- [B] Files registered in A_SOURCE_FILE_RECEIVED but NOT in ODS bucket
-- [C] Files present in both - with A_WORKFLOW_HISTORY_KEY populated
-- [D] Files present in both - A_WORKFLOW_HISTORY_KEY still NULL
--
-- Can be run at any time, read-only (no DML).
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
SET LINESIZE 200
PROMPT
PROMPT ============================================================================
PROMPT Diagnosing workflow key status (ODS bucket vs A_SOURCE_FILE_RECEIVED)
PROMPT ============================================================================
PROMPT
DECLARE
TYPE tStringList IS TABLE OF VARCHAR2(500);
vTableName VARCHAR2(200);
vTableExists NUMBER;
vBucketEmpty BOOLEAN;
vRefCursor SYS_REFCURSOR;
vFileName VARCHAR2(500);
-- Per-config counters
vOnlyInBucket NUMBER;
vOnlyInDB NUMBER;
vInBothWithKey NUMBER;
vInBothNoKey NUMBER;
-- Grand totals
vConfigsChecked NUMBER := 0;
vConfigsWithIssues NUMBER := 0;
vTotalOnlyInBucket NUMBER := 0;
vTotalOnlyInDB NUMBER := 0;
vTotalInBothWithKey NUMBER := 0;
vTotalInBothNoKey NUMBER := 0;
-- How many individual file names to print per category before summarising
cMaxPrint CONSTANT NUMBER := 1000;
vPrinted NUMBER;
FUNCTION IS_EXTERNAL_TABLE_EMPTY_ERROR(
pSqlCode NUMBER,
pSqlErrm VARCHAR2
) RETURN BOOLEAN
IS
BEGIN
RETURN pSqlCode IN (-29913, -29400)
OR INSTR(pSqlErrm, 'KUP-05002') > 0;
END;
BEGIN
FOR config_rec IN (
SELECT sfc.A_SOURCE_FILE_CONFIG_KEY,
sfc.A_SOURCE_KEY,
sfc.SOURCE_FILE_ID,
sfc.TABLE_ID
FROM CT_MRDS.A_SOURCE_FILE_CONFIG sfc
WHERE sfc.SOURCE_FILE_TYPE = 'INPUT'
AND sfc.TABLE_ID IS NOT NULL
ORDER BY sfc.A_SOURCE_KEY, sfc.SOURCE_FILE_ID, sfc.TABLE_ID
) LOOP
vTableName := 'ODS.' || config_rec.TABLE_ID || '_ODS';
SELECT COUNT(*) INTO vTableExists
FROM ALL_TABLES
WHERE OWNER = 'ODS'
AND TABLE_NAME = config_rec.TABLE_ID || '_ODS';
IF vTableExists = 0 THEN
CONTINUE;
END IF;
-- Check if the bucket location has any files at all
-- (empty bucket raises ORA-29913 instead of returning 0 rows)
vBucketEmpty := FALSE;
BEGIN
EXECUTE IMMEDIATE
'SELECT COUNT(*) FROM ' || vTableName || ' t WHERE ROWNUM = 1'
INTO vTableExists;
EXCEPTION
WHEN OTHERS THEN
IF IS_EXTERNAL_TABLE_EMPTY_ERROR(SQLCODE, SQLERRM) THEN
vBucketEmpty := TRUE;
ELSE
RAISE;
END IF;
END;
IF vBucketEmpty THEN
-- Bucket is empty: nothing in ODS, but registered records are all "not in bucket"
vOnlyInBucket := 0;
SELECT COUNT(*) INTO vOnlyInDB
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr
WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = config_rec.A_SOURCE_FILE_CONFIG_KEY
AND sfr.PROCESSING_STATUS IN ('VALIDATED','READY_FOR_INGESTION','INGESTED','ARCHIVED','ARCHIVED_AND_TRASHED','ARCHIVED_AND_PURGED');
vInBothWithKey := 0;
vInBothNoKey := 0;
ELSE
BEGIN
-- ----------------------------------------------------------------
-- [A] In ODS bucket but NOT in A_SOURCE_FILE_RECEIVED
-- ----------------------------------------------------------------
EXECUTE IMMEDIATE
'SELECT COUNT(DISTINCT t.file$name) ' ||
'FROM ' || vTableName || ' t ' ||
'WHERE t.file$name IS NOT NULL ' ||
' AND NOT EXISTS ( ' ||
' SELECT 1 FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
' WHERE sfr.SOURCE_FILE_NAME = t.file$name ' ||
' AND sfr.A_SOURCE_FILE_CONFIG_KEY = :1)'
INTO vOnlyInBucket
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
-- ----------------------------------------------------------------
-- [B] In A_SOURCE_FILE_RECEIVED (targeted statuses) but NOT in ODS bucket
-- ----------------------------------------------------------------
EXECUTE IMMEDIATE
'SELECT COUNT(*) ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND NOT EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME)'
INTO vOnlyInDB
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
-- ----------------------------------------------------------------
-- [C] In both, A_WORKFLOW_HISTORY_KEY IS NOT NULL
-- ----------------------------------------------------------------
EXECUTE IMMEDIATE
'SELECT COUNT(DISTINCT sfr.A_SOURCE_FILE_RECEIVED_KEY) ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.A_WORKFLOW_HISTORY_KEY IS NOT NULL ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME)'
INTO vInBothWithKey
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
-- ----------------------------------------------------------------
-- [D] In both, A_WORKFLOW_HISTORY_KEY IS NULL
-- ----------------------------------------------------------------
EXECUTE IMMEDIATE
'SELECT COUNT(DISTINCT sfr.A_SOURCE_FILE_RECEIVED_KEY) ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.A_WORKFLOW_HISTORY_KEY IS NULL ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME)'
INTO vInBothNoKey
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
EXCEPTION
WHEN OTHERS THEN
IF IS_EXTERNAL_TABLE_EMPTY_ERROR(SQLCODE, SQLERRM) THEN
vBucketEmpty := TRUE;
vOnlyInBucket := 0;
SELECT COUNT(*) INTO vOnlyInDB
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr
WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = config_rec.A_SOURCE_FILE_CONFIG_KEY
AND sfr.PROCESSING_STATUS IN ('VALIDATED','READY_FOR_INGESTION','INGESTED','ARCHIVED','ARCHIVED_AND_TRASHED','ARCHIVED_AND_PURGED');
vInBothWithKey := 0;
vInBothNoKey := 0;
DBMS_OUTPUT.PUT_LINE(' NOTE: ODS bucket became empty/inaccessible during diagnostics for ' || vTableName || '. Falling back to DB-only counts for [B].');
ELSE
RAISE;
END IF;
END;
END IF; -- vBucketEmpty
-- Skip configs with nothing to report
IF vOnlyInBucket = 0 AND vOnlyInDB = 0 AND vInBothWithKey = 0 AND vInBothNoKey = 0 THEN
CONTINUE;
END IF;
vConfigsChecked := vConfigsChecked + 1;
DBMS_OUTPUT.PUT_LINE('Config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (' || config_rec.A_SOURCE_KEY ||
'/' || config_rec.SOURCE_FILE_ID ||
'/' || config_rec.TABLE_ID || ')');
DBMS_OUTPUT.PUT_LINE(' [A] In bucket, not registered: ' || vOnlyInBucket);
DBMS_OUTPUT.PUT_LINE(' [B] Registered, not in bucket: ' || vOnlyInDB);
DBMS_OUTPUT.PUT_LINE(' [C] In both, A_WORKFLOW_HISTORY_KEY set: ' || vInBothWithKey);
DBMS_OUTPUT.PUT_LINE(' [D] In both, A_WORKFLOW_HISTORY_KEY NULL: ' || vInBothNoKey);
-- Print individual file names for categories with problems
IF vOnlyInBucket > 0 THEN
DBMS_OUTPUT.PUT_LINE(' [A] Files in bucket not registered:');
vPrinted := 0;
OPEN vRefCursor FOR
'SELECT DISTINCT t.file$name ' ||
'FROM ' || vTableName || ' t ' ||
'WHERE t.file$name IS NOT NULL ' ||
' AND NOT EXISTS ( ' ||
' SELECT 1 FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
' WHERE sfr.SOURCE_FILE_NAME = t.file$name ' ||
' AND sfr.A_SOURCE_FILE_CONFIG_KEY = :1) ' ||
'ORDER BY t.file$name'
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
LOOP
FETCH vRefCursor INTO vFileName;
EXIT WHEN vRefCursor%NOTFOUND;
vPrinted := vPrinted + 1;
IF vPrinted <= cMaxPrint THEN
DBMS_OUTPUT.PUT_LINE(' ' || vFileName);
ELSIF vPrinted = cMaxPrint + 1 THEN
DBMS_OUTPUT.PUT_LINE(' ... and ' || (vOnlyInBucket - cMaxPrint) || ' more');
END IF;
END LOOP;
CLOSE vRefCursor;
END IF;
IF vOnlyInDB > 0 THEN
vConfigsWithIssues := vConfigsWithIssues + 1;
DBMS_OUTPUT.PUT_LINE(' [B] Registered files not found in bucket:');
vPrinted := 0;
BEGIN
IF vBucketEmpty THEN
OPEN vRefCursor FOR
'SELECT sfr.SOURCE_FILE_NAME, sfr.PROCESSING_STATUS, sfr.A_WORKFLOW_HISTORY_KEY ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
'ORDER BY sfr.SOURCE_FILE_NAME'
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
ELSE
OPEN vRefCursor FOR
'SELECT sfr.SOURCE_FILE_NAME, sfr.PROCESSING_STATUS, sfr.A_WORKFLOW_HISTORY_KEY ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND NOT EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME) ' ||
'ORDER BY sfr.SOURCE_FILE_NAME'
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
END IF;
EXCEPTION
WHEN OTHERS THEN
IF IS_EXTERNAL_TABLE_EMPTY_ERROR(SQLCODE, SQLERRM) THEN
vBucketEmpty := TRUE;
DBMS_OUTPUT.PUT_LINE(' NOTE: Skipping ODS anti-join details due to empty/inaccessible external table for ' || vTableName || '.');
OPEN vRefCursor FOR
'SELECT sfr.SOURCE_FILE_NAME, sfr.PROCESSING_STATUS, sfr.A_WORKFLOW_HISTORY_KEY ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
'ORDER BY sfr.SOURCE_FILE_NAME'
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
ELSE
RAISE;
END IF;
END;
LOOP
DECLARE
vStatus VARCHAR2(50);
vWfKey NUMBER;
BEGIN
FETCH vRefCursor INTO vFileName, vStatus, vWfKey;
EXIT WHEN vRefCursor%NOTFOUND;
vPrinted := vPrinted + 1;
IF vPrinted <= cMaxPrint THEN
DBMS_OUTPUT.PUT_LINE(' ' || vFileName ||
' status=' || vStatus ||
' wf_key=' || NVL(TO_CHAR(vWfKey), 'NULL'));
ELSIF vPrinted = cMaxPrint + 1 THEN
DBMS_OUTPUT.PUT_LINE(' ... and ' || (vOnlyInDB - cMaxPrint) || ' more');
END IF;
END;
END LOOP;
CLOSE vRefCursor;
END IF;
IF vInBothNoKey > 0 THEN
vConfigsWithIssues := vConfigsWithIssues + 1;
DBMS_OUTPUT.PUT_LINE(' [D] Files in both but A_WORKFLOW_HISTORY_KEY still NULL:');
vPrinted := 0;
OPEN vRefCursor FOR
'SELECT sfr.SOURCE_FILE_NAME, sfr.PROCESSING_STATUS ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.A_WORKFLOW_HISTORY_KEY IS NULL ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME) ' ||
'ORDER BY sfr.SOURCE_FILE_NAME'
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
LOOP
DECLARE
vStatus VARCHAR2(50);
BEGIN
FETCH vRefCursor INTO vFileName, vStatus;
EXIT WHEN vRefCursor%NOTFOUND;
vPrinted := vPrinted + 1;
IF vPrinted <= cMaxPrint THEN
DBMS_OUTPUT.PUT_LINE(' ' || vFileName || ' status=' || vStatus);
ELSIF vPrinted = cMaxPrint + 1 THEN
DBMS_OUTPUT.PUT_LINE(' ... and ' || (vInBothNoKey - cMaxPrint) || ' more');
END IF;
END;
END LOOP;
CLOSE vRefCursor;
END IF;
DBMS_OUTPUT.PUT_LINE('');
-- Accumulate totals
vTotalOnlyInBucket := vTotalOnlyInBucket + vOnlyInBucket;
vTotalOnlyInDB := vTotalOnlyInDB + vOnlyInDB;
vTotalInBothWithKey := vTotalInBothWithKey + vInBothWithKey;
vTotalInBothNoKey := vTotalInBothNoKey + vInBothNoKey;
END LOOP;
DBMS_OUTPUT.PUT_LINE('============================================================================');
DBMS_OUTPUT.PUT_LINE('Grand Summary:');
DBMS_OUTPUT.PUT_LINE(' Configs with data checked: ' || vConfigsChecked);
DBMS_OUTPUT.PUT_LINE('  Issue blocks reported ([B] and [D]): ' || vConfigsWithIssues);
DBMS_OUTPUT.PUT_LINE(' [A] Files in bucket, not registered: ' || vTotalOnlyInBucket);
DBMS_OUTPUT.PUT_LINE(' [B] Registered, not in bucket: ' || vTotalOnlyInDB);
DBMS_OUTPUT.PUT_LINE(' [C] In both - A_WORKFLOW_HISTORY_KEY set: ' || vTotalInBothWithKey);
DBMS_OUTPUT.PUT_LINE(' [D] In both - A_WORKFLOW_HISTORY_KEY NULL: ' || vTotalInBothNoKey);
DBMS_OUTPUT.PUT_LINE('============================================================================');
IF vTotalOnlyInDB > 0 THEN
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('WARNING [B]: ' || vTotalOnlyInDB || ' registered file(s) not found in ODS bucket.');
DBMS_OUTPUT.PUT_LINE(' These may have been moved to ARCHIVE or deleted from ODS.');
END IF;
IF vTotalInBothNoKey > 0 THEN
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('WARNING [D]: ' || vTotalInBothNoKey || ' file(s) present in both but A_WORKFLOW_HISTORY_KEY is still NULL.');
DBMS_OUTPUT.PUT_LINE(' ODS table rows for these files may have A_WORKFLOW_HISTORY_KEY = NULL.');
DBMS_OUTPUT.PUT_LINE('    Re-run the backfill (step 01 of this post-hook) after the ODS rows are populated by the pipeline.');
END IF;
IF vConfigsWithIssues = 0 THEN
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('OK: No issues found. All registered files in ODS have A_WORKFLOW_HISTORY_KEY assigned.');
END IF;
EXCEPTION
WHEN OTHERS THEN
IF vRefCursor%ISOPEN THEN
CLOSE vRefCursor;
END IF;
DBMS_OUTPUT.PUT_LINE('ERROR: ' || SQLERRM);
RAISE;
END;
/
PROMPT
PROMPT Diagnosis complete.
PROMPT

View File

@@ -0,0 +1,43 @@
-- ============================================================================
-- MARS-1409-POSTHOOK Rollback Step 91: Clear backfilled A_WORKFLOW_HISTORY_KEY values
-- ============================================================================
-- Purpose: Reset A_WORKFLOW_HISTORY_KEY to NULL for all records in
-- A_SOURCE_FILE_RECEIVED. Reverts the backfill performed by
-- 01_MARS_1409_POSTHOOK_update_existing_workflow_keys.sql.
-- Author: Grzegorz Michalski
-- Date: 2026-03-13
-- Note: Records populated by the new trigger (after MARS-1409 install) will also
-- be cleared. The trigger will repopulate them on next file processing.
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
WHENEVER SQLERROR EXIT SQL.SQLCODE
PROMPT
PROMPT Clearing backfilled A_WORKFLOW_HISTORY_KEY values...
DECLARE
vCleared NUMBER := 0;
BEGIN
UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED
SET A_WORKFLOW_HISTORY_KEY = NULL
WHERE A_WORKFLOW_HISTORY_KEY IS NOT NULL;
vCleared := SQL%ROWCOUNT;
COMMIT;
DBMS_OUTPUT.PUT_LINE('Cleared A_WORKFLOW_HISTORY_KEY for ' || vCleared || ' record(s)');
DBMS_OUTPUT.PUT_LINE('Rollback of backfill completed successfully');
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
DBMS_OUTPUT.PUT_LINE('FATAL ERROR: ' || SQLERRM);
DBMS_OUTPUT.PUT_LINE('Transaction rolled back');
RAISE;
END;
/
PROMPT
PROMPT Workflow keys rollback completed!
PROMPT

View File

@@ -0,0 +1,60 @@
# MARS-1409-POSTHOOK: Backfill A_WORKFLOW_HISTORY_KEY for existing records
## Overview
Post-hook for MARS-1409. Backfills `A_WORKFLOW_HISTORY_KEY` in
`CT_MRDS.A_SOURCE_FILE_RECEIVED` for historical records that existed before
MARS-1409 was installed.
Matches records by `SOURCE_FILE_NAME` against `file$name` in the corresponding
ODS table (`ODS.<TABLE_ID>_ODS`) for each `INPUT` source configuration.
## Contents
| File | Description |
|------|-------------|
| `install_mars1409_posthook.sql` | Master installation script (SPOOL, ACCEPT, quit) |
| `rollback_mars1409_posthook.sql` | Master rollback script (SPOOL, ACCEPT, quit) |
| `01_MARS_1409_POSTHOOK_update_existing_workflow_keys.sql` | Backfill UPDATE script |
| `91_MARS_1409_POSTHOOK_rollback_workflow_keys.sql` | Clear backfilled values |
| `track_package_versions.sql` | Universal version tracking (no packages changed) |
| `verify_packages_version.sql` | Universal package verification |
| `README.md` | This file |
## Prerequisites
- MARS-1409 installed (`A_WORKFLOW_HISTORY_KEY` column must exist in `CT_MRDS.A_SOURCE_FILE_RECEIVED`)
- ODS tables populated with ingested data
- ADMIN user with access to CT_MRDS and ODS schemas
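The column prerequisite can be confirmed up front with a dictionary query mirroring the check the install script performs (a sketch; run as a user with visibility into the CT_MRDS schema):

```sql
-- Returns 1 when MARS-1409 has been installed, 0 otherwise
SELECT COUNT(*) AS col_exists
  FROM ALL_TAB_COLUMNS
 WHERE OWNER       = 'CT_MRDS'
   AND TABLE_NAME  = 'A_SOURCE_FILE_RECEIVED'
   AND COLUMN_NAME = 'A_WORKFLOW_HISTORY_KEY';
```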
## Installation
```powershell
# Execute as ADMIN user
Get-Content "MARS_Packages/REL02_POST/MARS-1409-POSTHOOK/install_mars1409_posthook.sql" | sql "ADMIN/Cloudpass#34@ggmichalski_high"
```
A log file is created automatically: `log/INSTALL_MARS_1409_POSTHOOK_<PDB>_<timestamp>.log`
## What it does
- Iterates all `INPUT` source configurations from `CT_MRDS.A_SOURCE_FILE_CONFIG`
- For each config, joins `A_SOURCE_FILE_RECEIVED` with `ODS.<TABLE_ID>_ODS` on `SOURCE_FILE_NAME = file$name`
- Updates `A_WORKFLOW_HISTORY_KEY` for records with statuses:
`VALIDATED`, `READY_FOR_INGESTION`, `INGESTED`, `ARCHIVED`, `ARCHIVED_AND_TRASHED`, `ARCHIVED_AND_PURGED`
- Skips configs that have no records to backfill, as well as configs whose ODS table is missing
- Prints summary with counts per config and overall totals
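For a single configuration, the backfill reduces to an UPDATE of roughly this shape (illustrative sketch; the script builds it dynamically per config, substituting the ODS table name for `<TABLE_ID>` and binding the config key):

```sql
UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED sfr
   SET sfr.A_WORKFLOW_HISTORY_KEY =
       (SELECT MIN(t.A_WORKFLOW_HISTORY_KEY)   -- MIN is defensive against duplicate keys
          FROM ODS.<TABLE_ID>_ODS t
         WHERE t.file$name = sfr.SOURCE_FILE_NAME
           AND t.A_WORKFLOW_HISTORY_KEY IS NOT NULL)
 WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :config_key
   AND sfr.A_WORKFLOW_HISTORY_KEY IS NULL
   AND sfr.PROCESSING_STATUS IN ('VALIDATED','READY_FOR_INGESTION','INGESTED',
                                 'ARCHIVED','ARCHIVED_AND_TRASHED','ARCHIVED_AND_PURGED')
   AND EXISTS (SELECT 1
                 FROM ODS.<TABLE_ID>_ODS t
                WHERE t.file$name = sfr.SOURCE_FILE_NAME
                  AND t.A_WORKFLOW_HISTORY_KEY IS NOT NULL);
```

The `EXISTS` clause ensures rows are only touched when a matching, non-NULL key actually exists in the ODS table, so unmatched files keep their NULL key for later trigger population.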
## Rollback
```powershell
# Execute as ADMIN user
Get-Content "MARS_Packages/REL02_POST/MARS-1409-POSTHOOK/rollback_mars1409_posthook.sql" | sql "ADMIN/Cloudpass#34@ggmichalski_high"
```
Rollback clears all non-NULL `A_WORKFLOW_HISTORY_KEY` values from `A_SOURCE_FILE_RECEIVED`.
The trigger installed by MARS-1409 will repopulate new records automatically.
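The effect of the rollback can be spot-checked afterwards (a sketch; immediately after rollback the count should be zero, and it will grow again as the trigger repopulates newly processed files):

```sql
SELECT COUNT(*) AS keys_remaining
  FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
 WHERE A_WORKFLOW_HISTORY_KEY IS NOT NULL;
```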
## Related
- MARS-1409: Add A_WORKFLOW_HISTORY_KEY column to A_SOURCE_FILE_RECEIVED (main package)

View File

@@ -0,0 +1,117 @@
-- ============================================================================
-- MARS-1409-POSTHOOK Master Installation Script
-- ============================================================================
-- Purpose: Post-hook for MARS-1409 - Backfill A_WORKFLOW_HISTORY_KEY for
-- existing A_SOURCE_FILE_RECEIVED records by joining with ODS tables.
-- Author: Grzegorz Michalski
-- Date: 2026-03-13
-- Prerequisites: MARS-1409 must be installed first (column must exist)
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
SET VERIFY OFF
SET FEEDBACK ON
SET ECHO OFF
-- Create log directory if it doesn't exist
host mkdir log 2>nul
-- Generate dynamic SPOOL filename with timestamp
var filename VARCHAR2(100)
BEGIN
:filename := 'log/INSTALL_MARS_1409_POSTHOOK_' || SYS_CONTEXT('USERENV', 'CON_NAME') || '_' || TO_CHAR(SYSDATE,'YYYYMMDD_HH24MISS') || '.log';
END;
/
column filename new_value _filename
select :filename filename from dual;
spool &_filename
PROMPT
PROMPT ============================================================================
PROMPT MARS-1409-POSTHOOK Installation Starting
PROMPT ============================================================================
PROMPT Purpose: Backfill A_WORKFLOW_HISTORY_KEY for historical records
PROMPT in A_SOURCE_FILE_RECEIVED using matching ODS tables.
PROMPT
PROMPT This script will:
PROMPT - Update A_WORKFLOW_HISTORY_KEY for records with targeted PROCESSING_STATUS
PROMPT - Match records by SOURCE_FILE_NAME against file$name in ODS tables
PROMPT   - Skip configs that have no NULL records or whose ODS table is missing
PROMPT
PROMPT Prerequisite: MARS-1409 installed (A_WORKFLOW_HISTORY_KEY column exists)
PROMPT Expected Duration: 30-180 minutes (depends on data volume)
PROMPT ============================================================================
-- Confirm installation with user
ACCEPT continue CHAR PROMPT 'Type YES to continue with installation, or Ctrl+C to abort: '
WHENEVER SQLERROR EXIT SQL.SQLCODE
BEGIN
IF '&continue' IS NULL OR TRIM('&continue') IS NULL OR UPPER(TRIM('&continue')) != 'YES' THEN
RAISE_APPLICATION_ERROR(-20000, 'Installation aborted by user');
END IF;
END;
/
WHENEVER SQLERROR CONTINUE
PROMPT
PROMPT ============================================================================
PROMPT PREREQUISITE CHECK: Verifying MARS-1409 objects
PROMPT ============================================================================
WHENEVER SQLERROR EXIT SQL.SQLCODE
DECLARE
vColCount NUMBER;
vTableCount NUMBER;
BEGIN
SELECT COUNT(*)
INTO vColCount
FROM ALL_TAB_COLUMNS
WHERE OWNER = 'CT_MRDS'
AND TABLE_NAME = 'A_SOURCE_FILE_RECEIVED'
AND COLUMN_NAME = 'A_WORKFLOW_HISTORY_KEY';
IF vColCount = 0 THEN
RAISE_APPLICATION_ERROR(-20001,
'Prerequisite failed: CT_MRDS.A_SOURCE_FILE_RECEIVED.A_WORKFLOW_HISTORY_KEY not found. Install MARS-1409 first (or do not run POSTHOOK after rollback).');
END IF;
SELECT COUNT(*)
INTO vTableCount
FROM ALL_TABLES
WHERE OWNER = 'CT_MRDS'
AND TABLE_NAME = 'A_WORKFLOW_HISTORY';
IF vTableCount = 0 THEN
RAISE_APPLICATION_ERROR(-20002,
'Prerequisite failed: CT_MRDS.A_WORKFLOW_HISTORY table not found.');
END IF;
DBMS_OUTPUT.PUT_LINE('OK: Prerequisites satisfied (MARS-1409 schema changes detected).');
END;
/
WHENEVER SQLERROR CONTINUE
PROMPT
PROMPT ============================================================================
PROMPT STEP 1: Backfill A_WORKFLOW_HISTORY_KEY for existing records
PROMPT ============================================================================
@@01_MARS_1409_POSTHOOK_update_existing_workflow_keys.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 2: Diagnose workflow key status
PROMPT ============================================================================
@@02_MARS_1409_POSTHOOK_diagnose_workflow_key_status.sql
PROMPT
PROMPT ============================================================================
PROMPT MARS-1409-POSTHOOK Installation Complete
PROMPT ============================================================================
PROMPT Final Status:
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS install_end FROM DUAL;
PROMPT
PROMPT Review the log file for detailed results: &_filename
PROMPT ============================================================================
spool off
quit;

View File

@@ -0,0 +1,69 @@
-- ============================================================================
-- MARS-1409-POSTHOOK Master Rollback Script
-- ============================================================================
-- Purpose: Rollback MARS-1409-POSTHOOK - Clear backfilled A_WORKFLOW_HISTORY_KEY
-- values from A_SOURCE_FILE_RECEIVED.
-- Author: Grzegorz Michalski
-- Date: 2026-03-13
-- Note: This clears ALL non-NULL A_WORKFLOW_HISTORY_KEY values. The trigger
-- installed by MARS-1409 will repopulate them on next file processing.
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
SET VERIFY OFF
SET FEEDBACK ON
SET ECHO OFF
-- Create log directory if it doesn't exist
host mkdir log 2>nul
-- Generate dynamic SPOOL filename with timestamp
var filename VARCHAR2(100)
BEGIN
:filename := 'log/ROLLBACK_MARS_1409_POSTHOOK_' || SYS_CONTEXT('USERENV', 'CON_NAME') || '_' || TO_CHAR(SYSDATE,'YYYYMMDD_HH24MISS') || '.log';
END;
/
column filename new_value _filename
select :filename filename from dual;
spool &_filename
PROMPT
PROMPT ============================================================================
PROMPT MARS-1409-POSTHOOK Rollback Starting
PROMPT ============================================================================
PROMPT This will reverse all changes from MARS-1409-POSTHOOK installation.
PROMPT
PROMPT Rollback steps:
PROMPT 1. Clear A_WORKFLOW_HISTORY_KEY values from A_SOURCE_FILE_RECEIVED
PROMPT ============================================================================
-- Confirm rollback with user
ACCEPT continue CHAR PROMPT 'Type YES to continue with rollback, or Ctrl+C to abort: '
WHENEVER SQLERROR EXIT SQL.SQLCODE
BEGIN
IF '&continue' IS NULL OR TRIM('&continue') IS NULL OR UPPER(TRIM('&continue')) != 'YES' THEN
RAISE_APPLICATION_ERROR(-20000, 'Rollback aborted by user');
END IF;
END;
/
WHENEVER SQLERROR CONTINUE
PROMPT
PROMPT ============================================================================
PROMPT STEP 1: Clear backfilled A_WORKFLOW_HISTORY_KEY values
PROMPT ============================================================================
@@91_MARS_1409_POSTHOOK_rollback_workflow_keys.sql
PROMPT
PROMPT ============================================================================
PROMPT MARS-1409-POSTHOOK Rollback Complete
PROMPT ============================================================================
PROMPT Final Status:
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS rollback_end FROM DUAL;
PROMPT
PROMPT Review the log file for detailed results: &_filename
PROMPT ============================================================================
spool off
quit;

View File

@@ -1,23 +1,55 @@
-- ============================================================================
-- MARS-1409 Step 01: Add A_WORKFLOW_HISTORY_KEY column
-- MARS-1409 Step 01: Add A_WORKFLOW_HISTORY_KEY column to A_SOURCE_FILE_RECEIVED
-- ============================================================================
-- Purpose: Add A_WORKFLOW_HISTORY_KEY column to A_SOURCE_FILE_RECEIVED table
-- Prerequisites: Table A_SOURCE_FILE_RECEIVED exists, A_WORKFLOW_HISTORY table exists
-- Purpose: Add A_WORKFLOW_HISTORY_KEY column to A_SOURCE_FILE_RECEIVED
-- using ALTER TABLE to preserve existing data.
-- Prerequisites: A_SOURCE_FILE_CONFIG table exists (FK dependency)
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
WHENEVER SQLERROR EXIT SQL.SQLCODE
PROMPT
PROMPT Adding A_WORKFLOW_HISTORY_KEY column to A_SOURCE_FILE_RECEIVED...
PROMPT Adding A_WORKFLOW_HISTORY_KEY to A_SOURCE_FILE_RECEIVED...
-- Add A_WORKFLOW_HISTORY_KEY column (no FK constraint - workflow history record created later)
ALTER TABLE CT_MRDS.A_SOURCE_FILE_RECEIVED ADD (
A_WORKFLOW_HISTORY_KEY NUMBER
);
BEGIN
EXECUTE IMMEDIATE 'ALTER TABLE CT_MRDS.A_SOURCE_FILE_RECEIVED ADD (A_WORKFLOW_HISTORY_KEY NUMBER)';
DBMS_OUTPUT.PUT_LINE('Column A_WORKFLOW_HISTORY_KEY added to A_SOURCE_FILE_RECEIVED.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -1430 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column A_WORKFLOW_HISTORY_KEY already exists in A_SOURCE_FILE_RECEIVED.');
ELSE
RAISE;
END IF;
END;
/
-- Add column comment
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_WORKFLOW_HISTORY_KEY IS
'Direct link to workflow history - each file has exactly one workflow execution. Populated during VALIDATE_SOURCE_FILE_RECEIVED (MARS-1409)';
PROMPT A_WORKFLOW_HISTORY_KEY column added successfully!
PROMPT
PROMPT Adding comment on A_WORKFLOW_HISTORY_KEY...
BEGIN
EXECUTE IMMEDIATE q'[COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_WORKFLOW_HISTORY_KEY IS 'Direct link to workflow history - each file has exactly one workflow execution. Populated during VALIDATE_SOURCE_FILE_RECEIVED (MARS-1409)']';
DBMS_OUTPUT.PUT_LINE('Comment on A_WORKFLOW_HISTORY_KEY added.');
END;
/
PROMPT
PROMPT Renaming IS_KEEP_IN_TRASH to IS_KEPT_IN_TRASH in A_SOURCE_FILE_CONFIG...
BEGIN
EXECUTE IMMEDIATE 'ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG RENAME COLUMN IS_KEEP_IN_TRASH TO IS_KEPT_IN_TRASH';
DBMS_OUTPUT.PUT_LINE('Column IS_KEEP_IN_TRASH renamed to IS_KEPT_IN_TRASH in A_SOURCE_FILE_CONFIG.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -904 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column IS_KEEP_IN_TRASH does not exist (already renamed or not present).');
ELSE
RAISE;
END IF;
END;
/
PROMPT
PROMPT Step 01 completed: A_WORKFLOW_HISTORY_KEY column added and IS_KEEP_IN_TRASH renamed to IS_KEPT_IN_TRASH.
PROMPT

View File

@@ -0,0 +1,104 @@
-- ============================================================================
-- MARS-1409 Step 10: Update A_TABLE_STAT, A_TABLE_STAT_HIST, A_SOURCE_FILE_CONFIG
-- ============================================================================
-- Purpose: Apply MARS-1409 table changes:
-- - A_TABLE_STAT and A_TABLE_STAT_HIST: DROP and recreate from new_version
-- (stats tables with no critical persistent data)
-- - A_SOURCE_FILE_CONFIG: ALTER TABLE ADD IS_WORKFLOW_SUCCESS_REQUIRED column
-- (preserves existing configuration data)
-- - A_SOURCE_FILE_RECEIVED: no changes in this step
-- Prerequisites: A_SOURCE table exists (FK parent of A_SOURCE_FILE_CONFIG)
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
WHENEVER SQLERROR EXIT SQL.SQLCODE
-- ----------------------------------------------------------------------------
-- DROP A_TABLE_STAT
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Dropping A_TABLE_STAT...
BEGIN
EXECUTE IMMEDIATE 'DROP TABLE CT_MRDS.A_TABLE_STAT';
DBMS_OUTPUT.PUT_LINE('Table A_TABLE_STAT dropped.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -942 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Table A_TABLE_STAT does not exist.');
ELSE
RAISE;
END IF;
END;
/
-- ----------------------------------------------------------------------------
-- DROP A_TABLE_STAT_HIST
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Dropping A_TABLE_STAT_HIST...
BEGIN
EXECUTE IMMEDIATE 'DROP TABLE CT_MRDS.A_TABLE_STAT_HIST';
DBMS_OUTPUT.PUT_LINE('Table A_TABLE_STAT_HIST dropped.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -942 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Table A_TABLE_STAT_HIST does not exist.');
ELSE
RAISE;
END IF;
END;
/
-- ----------------------------------------------------------------------------
-- ADD IS_WORKFLOW_SUCCESS_REQUIRED to A_SOURCE_FILE_CONFIG
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Adding IS_WORKFLOW_SUCCESS_REQUIRED to A_SOURCE_FILE_CONFIG...
BEGIN
EXECUTE IMMEDIATE
'ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG ADD ('
|| ' IS_WORKFLOW_SUCCESS_REQUIRED CHAR(1) DEFAULT ''Y'' NOT NULL '
|| ' CONSTRAINT CHK_IS_WORKFLOW_SUCCESS_REQUIRED CHECK (IS_WORKFLOW_SUCCESS_REQUIRED IN (''Y'', ''N''))'
|| ')';
DBMS_OUTPUT.PUT_LINE('Column IS_WORKFLOW_SUCCESS_REQUIRED added to A_SOURCE_FILE_CONFIG.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -1430 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column IS_WORKFLOW_SUCCESS_REQUIRED already exists in A_SOURCE_FILE_CONFIG.');
ELSE
RAISE;
END IF;
END;
/
PROMPT
PROMPT Adding comment on IS_WORKFLOW_SUCCESS_REQUIRED...
BEGIN
  EXECUTE IMMEDIATE q'[COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED IS 'Y=Archival requires WORKFLOW_SUCCESSFUL=Y (standard DBT flow), N=Archive regardless of workflow completion status (bypass for manual/non-DBT sources). Added MARS-1409']';
DBMS_OUTPUT.PUT_LINE('Comment on IS_WORKFLOW_SUCCESS_REQUIRED added.');
END;
/
-- ----------------------------------------------------------------------------
-- RECREATE A_TABLE_STAT and A_TABLE_STAT_HIST from new_version
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Creating A_TABLE_STAT (new_version)...
@@new_version/A_TABLE_STAT.sql
PROMPT
PROMPT Creating A_TABLE_STAT_HIST (new_version)...
@@new_version/A_TABLE_STAT_HIST.sql
PROMPT
PROMPT Step 10 completed: A_TABLE_STAT and A_TABLE_STAT_HIST recreated from new_version scripts,
PROMPT IS_WORKFLOW_SUCCESS_REQUIRED column added to A_SOURCE_FILE_CONFIG (MARS-1409).
PROMPT

View File

@@ -1,12 +1,12 @@
-- ============================================================================
-- MARS-1409 Step 08: Install TRG_A_WORKFLOW_HISTORY trigger
-- MARS-1409 Step 08: Install A_WORKFLOW_HISTORY trigger
-- ============================================================================
-- Purpose: Update trigger to mark A_SOURCE_FILE_RECEIVED as INGESTED
-- when WORKFLOW_SUCCESSFUL is set to 'Y'
-- ============================================================================
PROMPT Installing TRG_A_WORKFLOW_HISTORY (new_version)...
@@new_version/TRG_A_WORKFLOW_HISTORY.sql
PROMPT Installing A_WORKFLOW_HISTORY (new_version)...
@@new_version/A_WORKFLOW_HISTORY.sql
PROMPT
DECLARE
@@ -15,11 +15,14 @@ BEGIN
SELECT status INTO v_status
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name = 'TRG_A_WORKFLOW_HISTORY'
AND object_name = 'A_WORKFLOW_HISTORY'
AND object_type = 'TRIGGER';
DBMS_OUTPUT.PUT_LINE('TRG_A_WORKFLOW_HISTORY status: ' || v_status);
DBMS_OUTPUT.PUT_LINE('A_WORKFLOW_HISTORY status: ' || v_status);
IF v_status != 'VALID' THEN
RAISE_APPLICATION_ERROR(-20002, 'ERROR: A_WORKFLOW_HISTORY compiled with errors (status=' || v_status || '). Installation aborted.');
END IF;
EXCEPTION
WHEN NO_DATA_FOUND THEN
RAISE_APPLICATION_ERROR(-20001, 'ERROR: TRG_A_WORKFLOW_HISTORY not found after installation');
RAISE_APPLICATION_ERROR(-20001, 'ERROR: A_WORKFLOW_HISTORY not found after installation');
END;
/

View File

@@ -1,150 +0,0 @@
-- ============================================================================
-- MARS-1409 Step 09: Update A_WORKFLOW_HISTORY_KEY for existing records
-- ============================================================================
-- Purpose: Populate A_WORKFLOW_HISTORY_KEY for existing A_SOURCE_FILE_RECEIVED records
-- by extracting values from corresponding ODS tables
-- Prerequisites:
-- - A_WORKFLOW_HISTORY_KEY column exists in A_SOURCE_FILE_RECEIVED
-- - ODS tables contain A_WORKFLOW_HISTORY_KEY and file$name columns
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
PROMPT
PROMPT Updating A_WORKFLOW_HISTORY_KEY for existing A_SOURCE_FILE_RECEIVED records...
DECLARE
vUpdatedTotal NUMBER := 0;
vUpdatedCurrent NUMBER := 0;
vFailedConfigs NUMBER := 0;
vSkippedConfigs NUMBER := 0;
vTableName VARCHAR2(200);
vSQL VARCHAR2(4000);
vRecordsToUpdate NUMBER := 0;
BEGIN
-- Count total records to update
SELECT COUNT(*) INTO vRecordsToUpdate
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE A_WORKFLOW_HISTORY_KEY IS NULL
AND PROCESSING_STATUS IN ('VALIDATED', 'READY_FOR_INGESTION', 'INGESTED', 'ARCHIVED', 'ARCHIVED_AND_TRASHED', 'ARCHIVED_AND_PURGED');
DBMS_OUTPUT.PUT_LINE('Found ' || vRecordsToUpdate || ' records with NULL A_WORKFLOW_HISTORY_KEY');
DBMS_OUTPUT.PUT_LINE('----------------------------------------');
-- Process each INPUT configuration
FOR config_rec IN (
SELECT DISTINCT
sfc.A_SOURCE_FILE_CONFIG_KEY,
sfc.A_SOURCE_KEY,
sfc.SOURCE_FILE_ID,
sfc.TABLE_ID,
sfc.TEMPLATE_TABLE_NAME,
(SELECT COUNT(*)
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr
WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = sfc.A_SOURCE_FILE_CONFIG_KEY
AND sfr.A_WORKFLOW_HISTORY_KEY IS NULL
AND sfr.PROCESSING_STATUS IN ('VALIDATED', 'READY_FOR_INGESTION', 'INGESTED', 'ARCHIVED', 'ARCHIVED_AND_TRASHED', 'ARCHIVED_AND_PURGED')
) as NULL_COUNT
FROM CT_MRDS.A_SOURCE_FILE_CONFIG sfc
WHERE sfc.SOURCE_FILE_TYPE = 'INPUT'
AND sfc.TABLE_ID IS NOT NULL
ORDER BY sfc.A_SOURCE_KEY, sfc.SOURCE_FILE_ID, sfc.TABLE_ID
) LOOP
IF config_rec.NULL_COUNT = 0 THEN
vSkippedConfigs := vSkippedConfigs + 1;
DBMS_OUTPUT.PUT_LINE('SKIP: Config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (' || config_rec.A_SOURCE_KEY || '/' || config_rec.SOURCE_FILE_ID || '/' || config_rec.TABLE_ID ||
') - no records to update');
CONTINUE;
END IF;
BEGIN
-- Construct ODS table name from TABLE_ID (ODS tables have _ODS suffix)
vTableName := 'ODS.' || config_rec.TABLE_ID || '_ODS';
DBMS_OUTPUT.PUT_LINE('Processing config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (' || config_rec.A_SOURCE_KEY || '/' || config_rec.SOURCE_FILE_ID || '/' || config_rec.TABLE_ID || ')...');
-- Try to update using ODS table
-- Uses MIN to handle edge case of multiple workflow keys (shouldn't happen, but defensive)
vSQL :=
'UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'SET A_WORKFLOW_HISTORY_KEY = ( ' ||
' SELECT MIN(t.A_WORKFLOW_HISTORY_KEY) ' ||
' FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME ' ||
' AND t.A_WORKFLOW_HISTORY_KEY IS NOT NULL ' ||
') ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :config_key ' ||
' AND sfr.A_WORKFLOW_HISTORY_KEY IS NULL ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'', ''READY_FOR_INGESTION'', ''INGESTED'', ''ARCHIVED'', ''ARCHIVED_AND_TRASHED'', ''ARCHIVED_AND_PURGED'') ' ||
' AND EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME ' ||
' AND t.A_WORKFLOW_HISTORY_KEY IS NOT NULL ' ||
' )';
EXECUTE IMMEDIATE vSQL USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
vUpdatedCurrent := SQL%ROWCOUNT;
vUpdatedTotal := vUpdatedTotal + vUpdatedCurrent;
IF vUpdatedCurrent > 0 THEN
DBMS_OUTPUT.PUT_LINE(' SUCCESS: Updated ' || vUpdatedCurrent || ' record(s)');
ELSE
DBMS_OUTPUT.PUT_LINE(' INFO: No matching records found in ODS table (files may not be ingested yet)');
END IF;
EXCEPTION
WHEN OTHERS THEN
vFailedConfigs := vFailedConfigs + 1;
DBMS_OUTPUT.PUT_LINE(' ERROR: Failed for config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (table: ' || vTableName || ')');
DBMS_OUTPUT.PUT_LINE(' Reason: ' || SQLERRM);
-- Continue processing other configurations despite this failure
END;
END LOOP;
COMMIT;
DBMS_OUTPUT.PUT_LINE('----------------------------------------');
DBMS_OUTPUT.PUT_LINE('Update Summary:');
DBMS_OUTPUT.PUT_LINE(' Total records updated: ' || vUpdatedTotal);
DBMS_OUTPUT.PUT_LINE(' Configurations skipped (no NULL records): ' || vSkippedConfigs);
DBMS_OUTPUT.PUT_LINE(' Configurations failed: ' || vFailedConfigs);
-- Check remaining NULL records
SELECT COUNT(*) INTO vRecordsToUpdate
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE A_WORKFLOW_HISTORY_KEY IS NULL
-- AND PROCESSING_STATUS IN ('VALIDATED', 'READY_FOR_INGESTION', 'INGESTED', 'ARCHIVED', 'ARCHIVED_AND_TRASHED', 'ARCHIVED_AND_PURGED')
;
DBMS_OUTPUT.PUT_LINE(' Remaining NULL records: ' || vRecordsToUpdate);
IF vRecordsToUpdate > 0 THEN
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('NOTE: Some records still have NULL A_WORKFLOW_HISTORY_KEY.');
DBMS_OUTPUT.PUT_LINE(' This is expected for:');
DBMS_OUTPUT.PUT_LINE(' - Files not yet ingested into ODS tables');
DBMS_OUTPUT.PUT_LINE(' - Files with status RECEIVED or VALIDATION_FAILED');
DBMS_OUTPUT.PUT_LINE(' - ODS tables that do not exist or have different structure');
DBMS_OUTPUT.PUT_LINE('  These records will be populated when files are processed through VALIDATE_SOURCE_FILE_RECEIVED.');
END IF;
DBMS_OUTPUT.PUT_LINE('----------------------------------------');
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
DBMS_OUTPUT.PUT_LINE('FATAL ERROR: ' || SQLERRM);
DBMS_OUTPUT.PUT_LINE('Transaction rolled back');
RAISE;
END;
/
PROMPT
PROMPT Existing workflow keys update completed!
PROMPT

View File

@@ -93,9 +93,34 @@ WHERE owner = 'CT_MRDS'
AND name = 'FILE_ARCHIVER'
ORDER BY type, line, position;
-- Check DATA_EXPORTER compilation status
PROMPT
PROMPT 5C. Checking DATA_EXPORTER package compilation...
SELECT
object_name,
object_type,
status,
last_ddl_time
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name = 'DATA_EXPORTER'
AND object_type IN ('PACKAGE', 'PACKAGE BODY')
ORDER BY object_type;
SELECT
name,
type,
line,
position,
text
FROM all_errors
WHERE owner = 'CT_MRDS'
AND name = 'DATA_EXPORTER'
ORDER BY type, line, position;
-- Check trigger status
PROMPT
PROMPT 5B. Checking TRG_A_WORKFLOW_HISTORY trigger...
PROMPT 5B. Checking A_WORKFLOW_HISTORY trigger...
SELECT
trigger_name,
trigger_type,
@@ -103,7 +128,7 @@ SELECT
status
FROM all_triggers
WHERE owner = 'CT_MRDS'
AND trigger_name = 'TRG_A_WORKFLOW_HISTORY';
AND trigger_name = 'A_WORKFLOW_HISTORY';
-- Verify package versions
PROMPT
@@ -112,7 +137,9 @@ SELECT 'FILE_MANAGER' AS PACKAGE_NAME, CT_MRDS.FILE_MANAGER.GET_VERSION() AS V
UNION ALL
SELECT 'ENV_MANAGER' AS PACKAGE_NAME, CT_MRDS.ENV_MANAGER.GET_VERSION() AS VERSION FROM DUAL
UNION ALL
SELECT 'FILE_ARCHIVER' AS PACKAGE_NAME, CT_MRDS.FILE_ARCHIVER.GET_VERSION() AS VERSION FROM DUAL;
SELECT 'FILE_ARCHIVER' AS PACKAGE_NAME, CT_MRDS.FILE_ARCHIVER.GET_VERSION() AS VERSION FROM DUAL
UNION ALL
SELECT 'DATA_EXPORTER' AS PACKAGE_NAME, CT_MRDS.DATA_EXPORTER.GET_VERSION() AS VERSION FROM DUAL;
PROMPT
PROMPT ============================================================================

View File

@@ -0,0 +1,18 @@
-- ============================================================================
-- MARS-1409 Step 11: Install DATA_EXPORTER Package Specification
-- ============================================================================
-- Script: 11_MARS_1409_install_CT_MRDS_DATA_EXPORTER_SPEC.sql
-- Description: Install DATA_EXPORTER package specification (new version)
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- ============================================================================
PROMPT ============================================================================
PROMPT Installing DATA_EXPORTER package specification...
PROMPT ============================================================================
@@new_version/DATA_EXPORTER.pkg
PROMPT DATA_EXPORTER specification installed
/

View File

@@ -0,0 +1,18 @@
-- ============================================================================
-- MARS-1409 Step 12: Install DATA_EXPORTER Package Body
-- ============================================================================
-- Script: 12_MARS_1409_install_CT_MRDS_DATA_EXPORTER_BODY.sql
-- Description: Install DATA_EXPORTER package body (new version)
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- ============================================================================
PROMPT ============================================================================
PROMPT Installing DATA_EXPORTER package body...
PROMPT ============================================================================
@@new_version/DATA_EXPORTER.pkb
PROMPT DATA_EXPORTER body installed
/

View File

@@ -0,0 +1,18 @@
-- ============================================================================
-- MARS-1409 Rollback Script
-- ============================================================================
-- Script: 83_MARS_1409_rollback_CT_MRDS_DATA_EXPORTER_SPEC.sql
-- Description: Restore DATA_EXPORTER package specification (previous version)
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- ============================================================================
PROMPT ============================================================================
PROMPT Restoring DATA_EXPORTER package specification...
PROMPT ============================================================================
@@rollback_version/DATA_EXPORTER.pkg
PROMPT DATA_EXPORTER specification restored
/

View File

@@ -0,0 +1,18 @@
-- ============================================================================
-- MARS-1409 Rollback Script
-- ============================================================================
-- Script: 84_MARS_1409_rollback_CT_MRDS_DATA_EXPORTER_BODY.sql
-- Description: Restore DATA_EXPORTER package body (previous version)
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- ============================================================================
PROMPT ============================================================================
PROMPT Restoring DATA_EXPORTER package body...
PROMPT ============================================================================
@@rollback_version/DATA_EXPORTER.pkb
PROMPT DATA_EXPORTER body restored
/

View File

@@ -34,19 +34,19 @@ END;
-- Check trigger was restored
PROMPT
PROMPT 1B. Checking TRG_A_WORKFLOW_HISTORY trigger status...
PROMPT 1B. Checking A_WORKFLOW_HISTORY trigger status...
DECLARE
v_status VARCHAR2(20);
BEGIN
SELECT status INTO v_status
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name = 'TRG_A_WORKFLOW_HISTORY'
AND object_name = 'A_WORKFLOW_HISTORY'
AND object_type = 'TRIGGER';
DBMS_OUTPUT.PUT_LINE('TRG_A_WORKFLOW_HISTORY status: ' || v_status);
DBMS_OUTPUT.PUT_LINE('A_WORKFLOW_HISTORY status: ' || v_status);
EXCEPTION
WHEN NO_DATA_FOUND THEN
DBMS_OUTPUT.PUT_LINE('WARNING: TRG_A_WORKFLOW_HISTORY not found');
DBMS_OUTPUT.PUT_LINE('WARNING: A_WORKFLOW_HISTORY not found');
END;
/
@@ -60,7 +60,7 @@ SELECT
last_ddl_time
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER')
AND object_name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER', 'DATA_EXPORTER')
AND object_type IN ('PACKAGE', 'PACKAGE BODY')
ORDER BY object_name, object_type;
@@ -70,7 +70,7 @@ PROMPT 3. Checking for compilation errors...
SELECT name, type, line, position, text
FROM all_errors
WHERE owner = 'CT_MRDS'
AND name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER')
AND name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER', 'DATA_EXPORTER')
ORDER BY name, type, line, position;
-- Verify package versions
@@ -80,7 +80,9 @@ SELECT 'FILE_MANAGER' AS PACKAGE_NAME, CT_MRDS.FILE_MANAGER.GET_VERSION() AS V
UNION ALL
SELECT 'ENV_MANAGER' AS PACKAGE_NAME, CT_MRDS.ENV_MANAGER.GET_VERSION() AS VERSION FROM DUAL
UNION ALL
SELECT 'FILE_ARCHIVER' AS PACKAGE_NAME, CT_MRDS.FILE_ARCHIVER.GET_VERSION() AS VERSION FROM DUAL;
SELECT 'FILE_ARCHIVER' AS PACKAGE_NAME, CT_MRDS.FILE_ARCHIVER.GET_VERSION() AS VERSION FROM DUAL
UNION ALL
SELECT 'DATA_EXPORTER' AS PACKAGE_NAME, CT_MRDS.DATA_EXPORTER.GET_VERSION() AS VERSION FROM DUAL;
PROMPT
PROMPT ============================================================================


@@ -1,25 +1,26 @@
-- ============================================================================
-- MARS-1409 Rollback 93C: Restore TRG_A_WORKFLOW_HISTORY trigger
-- MARS-1409 Rollback 93C: Restore A_WORKFLOW_HISTORY trigger
-- ============================================================================
-- Purpose: Restore trigger to pre-MARS-1409 state
-- Removes INGESTED status update logic from A_SOURCE_FILE_RECEIVED
-- ============================================================================
PROMPT Restoring TRG_A_WORKFLOW_HISTORY (rollback_version)...
@@rollback_version/TRG_A_WORKFLOW_HISTORY.sql
PROMPT Restoring trigger A_WORKFLOW_HISTORY (rollback_version)...
@@rollback_version/A_WORKFLOW_HISTORY.sql
PROMPT
DECLARE
v_status VARCHAR2(20);
BEGIN
-- After rollback the trigger is restored under its original name: a_workflow_history
SELECT status INTO v_status
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name = 'TRG_A_WORKFLOW_HISTORY'
AND object_name = 'A_WORKFLOW_HISTORY'
AND object_type = 'TRIGGER';
DBMS_OUTPUT.PUT_LINE('TRG_A_WORKFLOW_HISTORY restored, status: ' || v_status);
DBMS_OUTPUT.PUT_LINE('A_WORKFLOW_HISTORY (original trigger) restored, status: ' || v_status);
EXCEPTION
WHEN NO_DATA_FOUND THEN
RAISE_APPLICATION_ERROR(-20001, 'ERROR: TRG_A_WORKFLOW_HISTORY not found after rollback');
RAISE_APPLICATION_ERROR(-20001, 'ERROR: A_WORKFLOW_HISTORY not found after rollback');
END;
/


@@ -0,0 +1,92 @@
-- ============================================================================
-- MARS-1409 Rollback Step 100: Restore A_TABLE_STAT, A_TABLE_STAT_HIST,
-- remove IS_WORKFLOW_SUCCESS_REQUIRED from A_SOURCE_FILE_CONFIG
-- ============================================================================
-- Purpose: Rollback of step 10:
-- - A_TABLE_STAT and A_TABLE_STAT_HIST: DROP and recreate from rollback_version
-- - A_SOURCE_FILE_CONFIG: ALTER TABLE DROP COLUMN IS_WORKFLOW_SUCCESS_REQUIRED
-- (preserves existing configuration data)
-- - A_SOURCE_FILE_RECEIVED: no changes in this step
-- Prerequisites: Step 10 was applied
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
WHENEVER SQLERROR EXIT SQL.SQLCODE
-- ----------------------------------------------------------------------------
-- DROP A_TABLE_STAT
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Dropping A_TABLE_STAT...
BEGIN
EXECUTE IMMEDIATE 'DROP TABLE CT_MRDS.A_TABLE_STAT';
DBMS_OUTPUT.PUT_LINE('Table A_TABLE_STAT dropped.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -942 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Table A_TABLE_STAT does not exist.');
ELSE
RAISE;
END IF;
END;
/
-- ----------------------------------------------------------------------------
-- DROP A_TABLE_STAT_HIST
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Dropping A_TABLE_STAT_HIST...
BEGIN
EXECUTE IMMEDIATE 'DROP TABLE CT_MRDS.A_TABLE_STAT_HIST';
DBMS_OUTPUT.PUT_LINE('Table A_TABLE_STAT_HIST dropped.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -942 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Table A_TABLE_STAT_HIST does not exist.');
ELSE
RAISE;
END IF;
END;
/
-- ----------------------------------------------------------------------------
-- DROP IS_WORKFLOW_SUCCESS_REQUIRED from A_SOURCE_FILE_CONFIG
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Dropping IS_WORKFLOW_SUCCESS_REQUIRED from A_SOURCE_FILE_CONFIG...
BEGIN
EXECUTE IMMEDIATE 'ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG DROP COLUMN IS_WORKFLOW_SUCCESS_REQUIRED';
DBMS_OUTPUT.PUT_LINE('Column IS_WORKFLOW_SUCCESS_REQUIRED dropped from A_SOURCE_FILE_CONFIG (CHK_IS_WORKFLOW_SUCCESS_REQUIRED constraint dropped automatically).');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -904 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column IS_WORKFLOW_SUCCESS_REQUIRED does not exist in A_SOURCE_FILE_CONFIG.');
ELSE
RAISE;
END IF;
END;
/
-- ----------------------------------------------------------------------------
-- RECREATE A_TABLE_STAT and A_TABLE_STAT_HIST from rollback_version
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Creating A_TABLE_STAT (rollback_version - pre-MARS-1409 structure)...
@@rollback_version/A_TABLE_STAT.sql
PROMPT
PROMPT Creating A_TABLE_STAT_HIST (rollback_version - pre-MARS-1409 structure)...
@@rollback_version/A_TABLE_STAT_HIST.sql
PROMPT
PROMPT Rollback Step 100 completed: A_TABLE_STAT and A_TABLE_STAT_HIST restored to pre-MARS-1409
PROMPT structure, IS_WORKFLOW_SUCCESS_REQUIRED column removed from A_SOURCE_FILE_CONFIG.
PROMPT
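A quick post-rollback check (a sketch, not part of the official script) can confirm the step landed: both tables should exist again and the dropped column should be gone. Object names are those used throughout this changeset.

```sql
-- Hypothetical verification for Rollback Step 100
SELECT object_name, status
  FROM all_objects
 WHERE owner = 'CT_MRDS'
   AND object_name IN ('A_TABLE_STAT', 'A_TABLE_STAT_HIST')
   AND object_type = 'TABLE';

-- Expect no rows: IS_WORKFLOW_SUCCESS_REQUIRED was dropped
SELECT column_name
  FROM all_tab_columns
 WHERE owner = 'CT_MRDS'
   AND table_name = 'A_SOURCE_FILE_CONFIG'
   AND column_name = 'IS_WORKFLOW_SUCCESS_REQUIRED';
```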


@@ -1,54 +0,0 @@
-- ============================================================================
-- MARS-1409 Rollback 91A: Clear A_WORKFLOW_HISTORY_KEY for existing records
-- ============================================================================
-- Purpose: Set A_WORKFLOW_HISTORY_KEY to NULL for all existing records
-- This is part of the rollback process - it restores the state before migration
-- Note: Cannot restore exact previous values (we don't track which were NULL)
-- This script sets ALL values to NULL to ensure a clean rollback state
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
PROMPT
PROMPT Clearing A_WORKFLOW_HISTORY_KEY for all A_SOURCE_FILE_RECEIVED records...
DECLARE
vTotalRecords NUMBER := 0;
vClearedRecords NUMBER := 0;
BEGIN
-- Count total records
SELECT COUNT(*) INTO vTotalRecords
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED;
DBMS_OUTPUT.PUT_LINE('Total records in A_SOURCE_FILE_RECEIVED: ' || vTotalRecords);
DBMS_OUTPUT.PUT_LINE('----------------------------------------');
-- Clear A_WORKFLOW_HISTORY_KEY for all records
UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED
SET A_WORKFLOW_HISTORY_KEY = NULL
WHERE A_WORKFLOW_HISTORY_KEY IS NOT NULL;
vClearedRecords := SQL%ROWCOUNT;
COMMIT;
DBMS_OUTPUT.PUT_LINE('Rollback Summary:');
DBMS_OUTPUT.PUT_LINE(' Records with A_WORKFLOW_HISTORY_KEY cleared: ' || vClearedRecords);
DBMS_OUTPUT.PUT_LINE(' Records already NULL: ' || (vTotalRecords - vClearedRecords));
DBMS_OUTPUT.PUT_LINE('----------------------------------------');
DBMS_OUTPUT.PUT_LINE('NOTE: All A_WORKFLOW_HISTORY_KEY values set to NULL');
DBMS_OUTPUT.PUT_LINE(' Original values cannot be restored (not tracked before migration)');
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
DBMS_OUTPUT.PUT_LINE('ERROR: ' || SQLERRM);
DBMS_OUTPUT.PUT_LINE('Transaction rolled back');
RAISE;
END;
/
PROMPT
PROMPT Workflow keys cleared successfully!
PROMPT


@@ -1,17 +1,46 @@
-- ============================================================================
-- MARS-1409 Rollback 91: Drop A_WORKFLOW_HISTORY_KEY column
-- MARS-1409 Rollback 99: Remove A_WORKFLOW_HISTORY_KEY column from A_SOURCE_FILE_RECEIVED
-- ============================================================================
-- Purpose: Remove A_WORKFLOW_HISTORY_KEY column from A_SOURCE_FILE_RECEIVED
-- Purpose: Drop A_WORKFLOW_HISTORY_KEY column from A_SOURCE_FILE_RECEIVED
-- using ALTER TABLE to preserve existing data.
-- Prerequisites: A_SOURCE_FILE_CONFIG table exists (FK dependency)
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
WHENEVER SQLERROR EXIT SQL.SQLCODE
PROMPT
PROMPT Dropping A_WORKFLOW_HISTORY_KEY column from A_SOURCE_FILE_RECEIVED...
PROMPT Dropping A_WORKFLOW_HISTORY_KEY from A_SOURCE_FILE_RECEIVED...
-- Drop A_WORKFLOW_HISTORY_KEY column (no FK constraint to drop first)
ALTER TABLE CT_MRDS.A_SOURCE_FILE_RECEIVED
DROP COLUMN A_WORKFLOW_HISTORY_KEY;
BEGIN
EXECUTE IMMEDIATE 'ALTER TABLE CT_MRDS.A_SOURCE_FILE_RECEIVED DROP COLUMN A_WORKFLOW_HISTORY_KEY';
DBMS_OUTPUT.PUT_LINE('Column A_WORKFLOW_HISTORY_KEY dropped from A_SOURCE_FILE_RECEIVED.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -904 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column A_WORKFLOW_HISTORY_KEY does not exist in A_SOURCE_FILE_RECEIVED.');
ELSE
RAISE;
END IF;
END;
/
PROMPT A_WORKFLOW_HISTORY_KEY column dropped successfully!
PROMPT
PROMPT Renaming IS_KEPT_IN_TRASH back to IS_KEEP_IN_TRASH in A_SOURCE_FILE_CONFIG...
BEGIN
EXECUTE IMMEDIATE 'ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG RENAME COLUMN IS_KEPT_IN_TRASH TO IS_KEEP_IN_TRASH';
DBMS_OUTPUT.PUT_LINE('Column IS_KEPT_IN_TRASH renamed back to IS_KEEP_IN_TRASH in A_SOURCE_FILE_CONFIG.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -904 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column IS_KEPT_IN_TRASH does not exist (already renamed back or not present).');
ELSE
RAISE;
END IF;
END;
/
PROMPT
PROMPT Rollback 99 completed: A_WORKFLOW_HISTORY_KEY removed and IS_KEPT_IN_TRASH renamed back to IS_KEEP_IN_TRASH.
PROMPT
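Before re-running any dependent package installs, the drop and rename above can be double-checked with a dictionary query (a sketch; column names as used elsewhere in this changeset). After a successful Rollback 99, only IS_KEEP_IN_TRASH should be returned.

```sql
-- Expect IS_KEEP_IN_TRASH present; IS_KEPT_IN_TRASH and
-- A_WORKFLOW_HISTORY_KEY should no longer appear.
SELECT table_name, column_name
  FROM all_tab_columns
 WHERE owner = 'CT_MRDS'
   AND (   (table_name = 'A_SOURCE_FILE_CONFIG'
            AND column_name IN ('IS_KEEP_IN_TRASH', 'IS_KEPT_IN_TRASH'))
        OR (table_name = 'A_SOURCE_FILE_RECEIVED'
            AND column_name = 'A_WORKFLOW_HISTORY_KEY'))
 ORDER BY table_name, column_name;
```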


@@ -31,9 +31,9 @@ PROMPT =========================================================================
PROMPT MARS-1409 Installation Starting
PROMPT ============================================================================
PROMPT Package: CT_MRDS.FILE_MANAGER v3.X.X
PROMPT Change: Add A_WORKFLOW_HISTORY_KEY to A_SOURCE_FILE_RECEIVED
PROMPT Purpose: Direct tracking of workflow history keys in file registration
PROMPT Steps: 11 (DDL, ENV_MANAGER Update, FILE_MANAGER Update, FILE_ARCHIVER Update, Trigger Update, Existing Records Backfill, Verification, Tracking)
PROMPT Change: Add A_WORKFLOW_HISTORY_KEY to A_SOURCE_FILE_RECEIVED; add ARCHIVAL_STRATEGY, ARCH_MINIMUM_AGE_MONTHS, IS_WORKFLOW_SUCCESS_REQUIRED and WORKFLOW_SUCCESS_* columns to A_TABLE_STAT/HIST
PROMPT Purpose: Direct tracking of workflow history keys in file registration; self-documenting statistics records; separate total vs workflow-success statistics
PROMPT Steps: 14 (DDL x2, ENV_MANAGER Update, FILE_MANAGER Update, FILE_ARCHIVER Update, DATA_EXPORTER Update, Trigger Update, Verification, Tracking, Version Verification)
PROMPT Timestamp:
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS install_start FROM DUAL;
PROMPT ============================================================================
@@ -56,64 +56,82 @@ PROMPT =========================================================================
PROMPT
PROMPT ============================================================================
PROMPT STEP 2: Update ENV_MANAGER package specification
PROMPT STEP 2: Add ARCHIVAL_STRATEGY, ARCH_MINIMUM_AGE_MONTHS, IS_WORKFLOW_SUCCESS_REQUIRED and WORKFLOW_SUCCESS_FILE_COUNT/ROW_COUNT/SIZE columns to A_TABLE_STAT and A_TABLE_STAT_HIST
PROMPT ============================================================================
@@02_MARS_1409_install_CT_MRDS_ENV_MANAGER_SPEC.sql
@@02_MARS_1409_add_archival_strategy_column.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 3: Update ENV_MANAGER package body
PROMPT STEP 3: Update ENV_MANAGER package specification
PROMPT ============================================================================
@@03_MARS_1409_install_CT_MRDS_ENV_MANAGER_BODY.sql
@@03_MARS_1409_install_CT_MRDS_ENV_MANAGER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 4: Update FILE_MANAGER package specification
PROMPT STEP 4: Update ENV_MANAGER package body
PROMPT ============================================================================
@@04_MARS_1409_install_CT_MRDS_FILE_MANAGER_SPEC.sql
@@04_MARS_1409_install_CT_MRDS_ENV_MANAGER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 5: Update FILE_MANAGER package body
PROMPT STEP 5: Update FILE_MANAGER package specification
PROMPT ============================================================================
@@05_MARS_1409_install_CT_MRDS_FILE_MANAGER_BODY.sql
@@05_MARS_1409_install_CT_MRDS_FILE_MANAGER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 6: Update FILE_ARCHIVER package specification
PROMPT STEP 6: Update FILE_MANAGER package body
PROMPT ============================================================================
@@06_MARS_1409_install_CT_MRDS_FILE_ARCHIVER_SPEC.sql
@@06_MARS_1409_install_CT_MRDS_FILE_MANAGER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 7: Update FILE_ARCHIVER package body
PROMPT STEP 7: Update FILE_ARCHIVER package specification
PROMPT ============================================================================
@@07_MARS_1409_install_CT_MRDS_FILE_ARCHIVER_BODY.sql
@@07_MARS_1409_install_CT_MRDS_FILE_ARCHIVER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 8: Update TRG_A_WORKFLOW_HISTORY trigger
PROMPT STEP 8: Update FILE_ARCHIVER package body
PROMPT ============================================================================
@@08_MARS_1409_install_CT_MRDS_TRG_A_WORKFLOW_HISTORY.sql
@@08_MARS_1409_install_CT_MRDS_FILE_ARCHIVER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 9: Update A_WORKFLOW_HISTORY_KEY for existing records
PROMPT STEP 9: Install DATA_EXPORTER package specification
PROMPT ============================================================================
@@09_MARS_1409_update_existing_workflow_keys.sql
@@11_MARS_1409_install_CT_MRDS_DATA_EXPORTER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 10: Verify installation
PROMPT STEP 10: Install DATA_EXPORTER package body
PROMPT ============================================================================
@@12_MARS_1409_install_CT_MRDS_DATA_EXPORTER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 11: Update A_WORKFLOW_HISTORY trigger
PROMPT ============================================================================
@@09_MARS_1409_install_CT_MRDS_A_WORKFLOW_HISTORY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 12: Verify installation
PROMPT ============================================================================
@@10_MARS_1409_verify_installation.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 11: Track package versions
PROMPT STEP 13: Track package versions
PROMPT ============================================================================
@@track_package_versions.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 14: Verify package versions
PROMPT ============================================================================
@@verify_packages_version.sql
PROMPT
PROMPT ============================================================================
PROMPT MARS-1409 Installation Complete
@@ -125,3 +143,5 @@ PROMPT Review the log file for detailed results: &_filename
PROMPT ============================================================================
spool off
quit;


@@ -0,0 +1,106 @@
-- ====================================================================
-- A_SOURCE_FILE_CONFIG Table
-- ====================================================================
-- Purpose: Store source file configuration and processing rules
-- MARS-1049: Added ENCODING column for CSV character set support
-- MARS-828: Added ARCHIVAL_STRATEGY and MINIMUM_AGE_MONTHS for archival automation
-- MARS-1409: Added IS_WORKFLOW_SUCCESS_REQUIRED flag for workflow bypass
-- ====================================================================
CREATE TABLE CT_MRDS.A_SOURCE_FILE_CONFIG (
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL ENABLE,
A_SOURCE_KEY VARCHAR2(30) NOT NULL ENABLE,
SOURCE_FILE_TYPE VARCHAR2(200), -- Can be 'INPUT' or 'CONTAINER' or 'LOAD_CONFIG'
SOURCE_FILE_ID VARCHAR2(200),
SOURCE_FILE_DESC VARCHAR2(2000),
SOURCE_FILE_NAME_PATTERN VARCHAR2(200),
TABLE_ID VARCHAR2(200),
TEMPLATE_TABLE_NAME VARCHAR2(200),
CONTAINER_FILE_KEY NUMBER(38,0),
ARCHIVE_THRESHOLD_DAYS NUMBER(4,0),
ARCHIVE_THRESHOLD_FILES_COUNT NUMBER(38,0),
ARCHIVE_THRESHOLD_BYTES_SUM NUMBER(38,0),
ODS_SCHEMA_NAME VARCHAR2(100),
ARCHIVE_THRESHOLD_ROWS_COUNT NUMBER(38,0),
HOURS_TO_EXPIRE_STATISTICS NUMBER(38,3),
ARCHIVAL_STRATEGY VARCHAR2(50),
MINIMUM_AGE_MONTHS NUMBER(3,0),
ENCODING VARCHAR2(50) DEFAULT 'UTF8',
IS_ARCHIVE_ENABLED CHAR(1) DEFAULT 'N' NOT NULL,
IS_KEPT_IN_TRASH CHAR(1) DEFAULT 'N' NOT NULL,
IS_WORKFLOW_SUCCESS_REQUIRED CHAR(1) DEFAULT 'Y' NOT NULL,
CONSTRAINT A_SOURCE_FILE_CONFIG_PK PRIMARY KEY (A_SOURCE_FILE_CONFIG_KEY),
CONSTRAINT CHK_IS_ARCHIVE_ENABLED CHECK (IS_ARCHIVE_ENABLED IN ('Y', 'N')),
CONSTRAINT CHK_IS_KEPT_IN_TRASH CHECK (IS_KEPT_IN_TRASH IN ('Y', 'N')),
CONSTRAINT CHK_IS_WORKFLOW_SUCCESS_REQUIRED CHECK (IS_WORKFLOW_SUCCESS_REQUIRED IN ('Y', 'N')),
CONSTRAINT SOURCE_FILE_TYPE_CHK CHECK (SOURCE_FILE_TYPE IN ('INPUT', 'CONTAINER', 'LOAD_CONFIG')),
CONSTRAINT ASFC_A_SOURCE_KEY_FK FOREIGN KEY(A_SOURCE_KEY) REFERENCES CT_MRDS.A_SOURCE(A_SOURCE_KEY),
CONSTRAINT ASFC_CONTAINER_FILE_KEY_FK FOREIGN KEY(CONTAINER_FILE_KEY) REFERENCES CT_MRDS.A_SOURCE_FILE_CONFIG(A_SOURCE_FILE_CONFIG_KEY),
CONSTRAINT A_SOURCE_FILE_CONFIG_UQ1 UNIQUE(SOURCE_FILE_TYPE, SOURCE_FILE_ID, TABLE_ID)
) TABLESPACE "DATA";
-- Column comments
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY IS
'Primary key - unique identifier for source file configuration record';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_KEY IS
'Foreign key to A_SOURCE table - identifies the source system (e.g., LM, C2D, CSDB)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE IS
'Type of file configuration: INPUT (data files), CONTAINER (xml files), or LOAD_CONFIG (configuration files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID IS
'Unique identifier for the source file within the source system (e.g., UC_DISSEM, STANDING_FACILITIES)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_DESC IS
'Human-readable description of the source file and its purpose';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_NAME_PATTERN IS
'Filename pattern for matching incoming files (supports wildcards, e.g., UC_NMA_DISSEM-*.csv)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID IS
'Identifier for the target table where data will be loaded (without schema prefix)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME IS
'Fully qualified name of template table in CT_ET_TEMPLATES schema used for external table creation';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY IS
'Foreign key to parent container configuration when this file is part of an XML container file (NULL for standalone files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_DAYS IS
'Threshold for THRESHOLD_BASED strategy: archive data older than N days';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_FILES_COUNT IS
'Trigger archival when file count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_BYTES_SUM IS
'Trigger archival when total size in bytes exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_ROWS_COUNT IS
'Trigger archival when total row count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ODS_SCHEMA_NAME IS
'Schema name where ODS external tables are created (typically ODS)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.HOURS_TO_EXPIRE_STATISTICS IS
'Number of hours before table statistics expire and need to be recalculated';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY IS
'Archival strategy: THRESHOLD_BASED (days-based), MINIMUM_AGE_MONTHS (0=current month, N=retain N months), HYBRID (combination)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS IS
'Minimum age in months before archival (required for MINIMUM_AGE_MONTHS and HYBRID strategies, 0=current month only)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING IS
'Oracle character set name for CSV files (e.g., UTF8, WE8MSWIN1252, EE8ISO8859P2)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED IS
'Y=Enable archiving, N=Skip archiving. Controls if table participates in archival process';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_KEPT_IN_TRASH IS
'Y=Keep files in TRASH after archiving, N=Delete immediately. Controls TRASH retention policy';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED IS
'Y=Archival requires WORKFLOW_SUCCESSFUL=Y (standard DBT flow), N=Archive regardless of workflow completion status (bypass for manual/non-DBT sources). Added MARS-1409';
GRANT SELECT, INSERT, UPDATE, DELETE ON CT_MRDS.A_SOURCE_FILE_CONFIG TO MRDS_LOADER_ROLE;
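As an illustration of how the new archival columns combine (hypothetical values, not shipped seed data; the source key and file id reuse examples from the column comments above), a HYBRID-strategy configuration row might look like:

```sql
-- Hypothetical example row: HYBRID archival with a 3-month minimum age,
-- workflow success required before archival (the MARS-1409 default).
INSERT INTO CT_MRDS.A_SOURCE_FILE_CONFIG (
    A_SOURCE_FILE_CONFIG_KEY, A_SOURCE_KEY, SOURCE_FILE_TYPE,
    SOURCE_FILE_ID, SOURCE_FILE_NAME_PATTERN, TABLE_ID,
    ARCHIVAL_STRATEGY, MINIMUM_AGE_MONTHS,
    ARCHIVE_THRESHOLD_FILES_COUNT,
    IS_ARCHIVE_ENABLED, IS_KEPT_IN_TRASH, IS_WORKFLOW_SUCCESS_REQUIRED
) VALUES (
    1001, 'LM', 'INPUT',                      -- hypothetical key and source
    'STANDING_FACILITIES', 'UC_NMA_DISSEM-*.csv', 'STANDING_FACILITIES',
    'HYBRID', 3,                              -- archive files older than 3 months
    500,                                      -- ... or once 500 files accumulate
    'Y', 'N', 'Y'
);
```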


@@ -0,0 +1,56 @@
-- ====================================================================
-- A_TABLE_STAT Table
-- ====================================================================
-- Purpose: Store current table statistics and archival thresholds
-- ====================================================================
CREATE TABLE CT_MRDS.A_TABLE_STAT (
-- === Identity / metadata ===
A_TABLE_STAT_KEY NUMBER(38,0) PRIMARY KEY,
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL,
TABLE_NAME VARCHAR2(200) NOT NULL,
CREATED TIMESTAMP(6) DEFAULT SYSTIMESTAMP,
-- === Archival configuration snapshot (values at gather time) ===
ARCHIVAL_STRATEGY VARCHAR2(30),
ARCH_MINIMUM_AGE_MONTHS NUMBER(4,0),
ARCH_THRESHOLD_DAYS NUMBER(4,0),
IS_WORKFLOW_SUCCESS_REQUIRED CHAR(1),
-- === Total statistics (all files, no workflow filter) ===
FILE_COUNT NUMBER(38,0),
ROW_COUNT NUMBER(38,0),
TOTAL_SIZE NUMBER(38,0),
-- === Over-archival-threshold statistics ===
OVER_ARCH_THRESOLD_FILE_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_ROW_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_TOTAL_SIZE NUMBER(38,0),
-- === Workflow-success statistics (WORKFLOW_SUCCESSFUL='Y' files only) ===
WORKFLOW_SUCCESS_FILE_COUNT NUMBER(38,0),
WORKFLOW_SUCCESS_ROW_COUNT NUMBER(38,0),
WORKFLOW_SUCCESS_TOTAL_SIZE NUMBER(38,0),
CONSTRAINT A_TABLE_STAT_UK1 UNIQUE(A_SOURCE_FILE_CONFIG_KEY)
) TABLESPACE "DATA";
-- Identity / metadata
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.A_TABLE_STAT_KEY IS 'Primary key, populated from A_TABLE_STAT_KEY_SEQ sequence.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.A_SOURCE_FILE_CONFIG_KEY IS 'Foreign key to A_SOURCE_FILE_CONFIG; one current-stat row per config entry (unique constraint A_TABLE_STAT_UK1).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.TABLE_NAME IS 'Fully qualified ODS external table name (SCHEMA.TABLE) for which statistics were gathered.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.CREATED IS 'Timestamp when the statistics were gathered by FILE_ARCHIVER.GATHER_TABLE_STAT.';
-- Archival configuration snapshot
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.ARCHIVAL_STRATEGY IS 'Archival strategy active when statistics were gathered (THRESHOLD_BASED, MINIMUM_AGE_MONTHS, HYBRID). Populated by FILE_ARCHIVER.GATHER_TABLE_STAT (MARS-1409).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.ARCH_MINIMUM_AGE_MONTHS IS 'Minimum age threshold in months copied from A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS. Populated for MINIMUM_AGE_MONTHS and HYBRID strategies; NULL for THRESHOLD_BASED. Populated by FILE_ARCHIVER.GATHER_TABLE_STAT (MARS-1409).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.ARCH_THRESHOLD_DAYS IS 'Archive threshold in days copied from A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_DAYS. Used by THRESHOLD_BASED and HYBRID strategies.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.IS_WORKFLOW_SUCCESS_REQUIRED IS 'Snapshot of A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED at gather time. Y = OVER_ARCH_THRESOLD counts include only files with WORKFLOW_SUCCESSFUL=Y. Added MARS-1409.';
-- Total statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.FILE_COUNT IS 'Total number of files present in the ODS external table, regardless of workflow success status.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.ROW_COUNT IS 'Total row count across all files in the ODS external table.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.TOTAL_SIZE IS 'Total size in bytes of all files in the ODS bucket location.';
-- Over-threshold statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.OVER_ARCH_THRESOLD_FILE_COUNT IS 'Number of files that satisfy the archival threshold condition. When IS_WORKFLOW_SUCCESS_REQUIRED=Y, also requires WORKFLOW_SUCCESSFUL=Y.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.OVER_ARCH_THRESOLD_ROW_COUNT IS 'Row count for files that satisfy the archival threshold condition.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.OVER_ARCH_THRESOLD_TOTAL_SIZE IS 'Size in bytes for files that satisfy the archival threshold condition.';
-- Workflow-success statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.WORKFLOW_SUCCESS_FILE_COUNT IS 'Count of files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.WORKFLOW_SUCCESS_ROW_COUNT IS 'Row count for files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.WORKFLOW_SUCCESS_TOTAL_SIZE IS 'Size in bytes for files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
-- Note: A_TABLE_STAT_UK1 index is auto-created by the UNIQUE constraint definition above.
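Because the configuration snapshot travels with each stat row, backlog monitoring needs no join back to A_SOURCE_FILE_CONFIG. A sketch, reusing only columns defined above:

```sql
-- Hypothetical monitoring query: tables with files currently eligible
-- for archival, ordered by backlog size.
SELECT s.TABLE_NAME,
       s.ARCHIVAL_STRATEGY,
       s.IS_WORKFLOW_SUCCESS_REQUIRED,
       s.OVER_ARCH_THRESOLD_FILE_COUNT,
       s.WORKFLOW_SUCCESS_FILE_COUNT,
       s.FILE_COUNT
  FROM CT_MRDS.A_TABLE_STAT s
 WHERE s.OVER_ARCH_THRESOLD_FILE_COUNT > 0
 ORDER BY s.OVER_ARCH_THRESOLD_FILE_COUNT DESC;
```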


@@ -0,0 +1,53 @@
-- ====================================================================
-- A_TABLE_STAT_HIST Table
-- ====================================================================
-- Purpose: Store historical table statistics for trend analysis
-- ====================================================================
CREATE TABLE CT_MRDS.A_TABLE_STAT_HIST (
-- === Identity / metadata ===
A_TABLE_STAT_HIST_KEY NUMBER(38,0) PRIMARY KEY,
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL,
TABLE_NAME VARCHAR2(200) NOT NULL,
CREATED TIMESTAMP(6) DEFAULT SYSTIMESTAMP,
-- === Archival configuration snapshot (values at gather time) ===
ARCHIVAL_STRATEGY VARCHAR2(30),
ARCH_MINIMUM_AGE_MONTHS NUMBER(4,0),
ARCH_THRESHOLD_DAYS NUMBER(4,0),
IS_WORKFLOW_SUCCESS_REQUIRED CHAR(1),
-- === Total statistics (all files, no workflow filter) ===
FILE_COUNT NUMBER(38,0),
ROW_COUNT NUMBER(38,0),
TOTAL_SIZE NUMBER(38,0),
-- === Over-archival-threshold statistics ===
OVER_ARCH_THRESOLD_FILE_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_ROW_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_TOTAL_SIZE NUMBER(38,0),
-- === Workflow-success statistics (WORKFLOW_SUCCESSFUL='Y' files only) ===
WORKFLOW_SUCCESS_FILE_COUNT NUMBER(38,0),
WORKFLOW_SUCCESS_ROW_COUNT NUMBER(38,0),
WORKFLOW_SUCCESS_TOTAL_SIZE NUMBER(38,0)
) TABLESPACE "DATA";
-- Identity / metadata
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.A_TABLE_STAT_HIST_KEY IS 'Primary key, populated from A_TABLE_STAT_KEY_SEQ sequence (shared with A_TABLE_STAT).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.A_SOURCE_FILE_CONFIG_KEY IS 'Foreign key to A_SOURCE_FILE_CONFIG. Multiple history rows per config entry (no unique constraint).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.TABLE_NAME IS 'Fully qualified ODS external table name (SCHEMA.TABLE) for which statistics were gathered.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.CREATED IS 'Timestamp when the statistics snapshot was taken by FILE_ARCHIVER.GATHER_TABLE_STAT.';
-- Archival configuration snapshot
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.ARCHIVAL_STRATEGY IS 'Archival strategy active when statistics were gathered (THRESHOLD_BASED, MINIMUM_AGE_MONTHS, HYBRID). Populated by FILE_ARCHIVER.GATHER_TABLE_STAT (MARS-1409).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.ARCH_MINIMUM_AGE_MONTHS IS 'Minimum age threshold in months copied from A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS. Populated for MINIMUM_AGE_MONTHS and HYBRID strategies; NULL for THRESHOLD_BASED. Populated by FILE_ARCHIVER.GATHER_TABLE_STAT (MARS-1409).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.ARCH_THRESHOLD_DAYS IS 'Archive threshold in days copied from A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_DAYS. Used by THRESHOLD_BASED and HYBRID strategies.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.IS_WORKFLOW_SUCCESS_REQUIRED IS 'Snapshot of A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED at gather time. Y = OVER_ARCH_THRESOLD counts include only files with WORKFLOW_SUCCESSFUL=Y. Added MARS-1409.';
-- Total statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.FILE_COUNT IS 'Total number of files present in the ODS external table at gather time, regardless of workflow success status.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.ROW_COUNT IS 'Total row count across all files in the ODS external table at gather time.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.TOTAL_SIZE IS 'Total size in bytes of all files in the ODS bucket location at gather time.';
-- Over-threshold statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.OVER_ARCH_THRESOLD_FILE_COUNT IS 'Number of files that satisfied the archival threshold condition. When IS_WORKFLOW_SUCCESS_REQUIRED=Y, also required WORKFLOW_SUCCESSFUL=Y.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.OVER_ARCH_THRESOLD_ROW_COUNT IS 'Row count for files that satisfied the archival threshold condition.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.OVER_ARCH_THRESOLD_TOTAL_SIZE IS 'Size in bytes for files that satisfied the archival threshold condition.';
-- Workflow-success statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.WORKFLOW_SUCCESS_FILE_COUNT IS 'Count of files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.WORKFLOW_SUCCESS_ROW_COUNT IS 'Row count for files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.WORKFLOW_SUCCESS_TOTAL_SIZE IS 'Size in bytes for files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
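Since every gather appends a row here, the history table supports simple trend queries. A sketch (the table name filter is a hypothetical example):

```sql
-- Hypothetical trend query: daily growth of a table's file count and
-- over-threshold backlog.
SELECT TRUNC(CREATED)                       AS stat_day,
       MAX(FILE_COUNT)                      AS files,
       MAX(OVER_ARCH_THRESOLD_FILE_COUNT)   AS over_threshold_files,
       MAX(TOTAL_SIZE)                      AS total_bytes
  FROM CT_MRDS.A_TABLE_STAT_HIST
 WHERE TABLE_NAME = 'ODS.STANDING_FACILITIES'   -- hypothetical table name
 GROUP BY TRUNC(CREATED)
 ORDER BY stat_day;
```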


@@ -0,0 +1,62 @@
WHENEVER SQLERROR CONTINUE
GRANT SELECT, INSERT, UPDATE, DELETE ON ct_ods.a_load_history TO ct_mrds;
WHENEVER SQLERROR EXIT SQL.SQLCODE
-- ============================================================================
-- A_WORKFLOW_HISTORY Trigger Definition
-- ============================================================================
CREATE OR REPLACE EDITIONABLE TRIGGER "CT_MRDS"."A_WORKFLOW_HISTORY"
AFTER INSERT OR UPDATE OF workflow_successful ON ct_mrds.a_workflow_history
REFERENCING NEW AS new OLD AS old
FOR EACH ROW
DECLARE
v_workflow_name VARCHAR2(128);
v_wla_id NUMBER;
BEGIN
IF :new.service_name = 'ODS' AND :new.workflow_name IN (
'w_ODS_LM_STANDING_FACILITIES', 'w_ODS_CSDB_DEBT', 'w_ODS_CSDB_DEBT_DAILY', 'w_ODS_CSDB_RATINGS_FULL',
'w_ODS_TMS_LIMIT_ACCESS', 'w_ODS_TMS_PORTFOLIO_ACCESS', 'w_ODS_TMS_PORTFOLIO_TREE',
'w_ODS_TMS_COLLATERAL_INVENTORY', 'w_ODS_TOP_FULLBIDARRAY_COMPILED', 'w_ODS_TOP_ANNOUNCEMENT',
'w_ODS_TOP_ALLOTMENT_MODIFICATIONS', 'w_ODS_TOP_ALLOTMENT', 'w_ODS_CEPH_PRICING', 'w_ODS_C2D_MPEC'
) THEN
IF :new.workflow_successful = 'Y' AND :new.workflow_successful <> NVL(:old.workflow_successful, 'N') THEN
CASE
WHEN :new.workflow_name = 'w_ODS_LM_STANDING_FACILITIES' THEN v_workflow_name := 'w_ODS_LM_STANDING_FACILITY';
WHEN :new.workflow_name = 'w_ODS_TMS_LIMIT_ACCESS' THEN v_workflow_name := 'w_ODS_TMS_RAR_LIMITACCESS';
WHEN :new.workflow_name = 'w_ODS_TMS_PORTFOLIO_ACCESS' THEN v_workflow_name := 'w_ODS_TMS_RAR_PORTFOLIOACCESS';
WHEN :new.workflow_name = 'w_ODS_TMS_PORTFOLIO_TREE' THEN v_workflow_name := 'w_ODS_TMS_RAR_PORTFOLIOTREE';
WHEN :new.workflow_name = 'w_ODS_TMS_COLLATERAL_INVENTORY' THEN v_workflow_name := 'w_ODS_TMS_RAR_RARCOLLATERALINVENTORY';
WHEN :new.workflow_name = 'w_ODS_TOP_FULLBIDARRAY_COMPILED' THEN v_workflow_name := 'w_ODS_TOP_FULLBIDARRAY_COMPILED';
WHEN :new.workflow_name = 'w_ODS_TOP_ANNOUNCEMENT' THEN v_workflow_name := 'w_ODS_TOP_ANNOUNCEMENT';
WHEN :new.workflow_name = 'w_ODS_TOP_ALLOTMENT_MODIFICATIONS' THEN v_workflow_name := 'w_ODS_TOP_ALLOTMENT_MODIFICATIONS';
WHEN :new.workflow_name = 'w_ODS_TOP_ALLOTMENT' THEN v_workflow_name := 'w_ODS_TOP_ALLOTMENT';
WHEN :new.workflow_name = 'w_ODS_CEPH_PRICING' THEN v_workflow_name := 'w_ODS_CEPH_PRICING';
WHEN :new.workflow_name = 'w_ODS_C2D_MPEC' THEN v_workflow_name := 'w_ODS_C2D_MPEC';
ELSE
v_workflow_name := :new.workflow_name;
END CASE;
BEGIN
v_wla_id := TO_NUMBER(:new.orchestration_run_id);
EXCEPTION WHEN OTHERS THEN NULL;
END;
INSERT INTO ct_ods.a_load_history (
a_etl_load_set_key, workflow_name, infa_run_id, load_start, load_end, exdi_appl_req_id, exdi_correlation_id, load_successful, wla_run_id, dq_flag
) VALUES (
:new.a_workflow_history_key, v_workflow_name, NULL, :new.workflow_start, :new.workflow_end, NULL, NULL, :new.workflow_successful, v_wla_id, 'F'
);
END IF;
END IF;
-- MARS-1409: When workflow completes successfully, mark linked files as INGESTED
IF :new.workflow_successful = 'Y' THEN
IF INSERTING OR (UPDATING AND (:old.workflow_successful IS NULL OR :old.workflow_successful != 'Y')) THEN
UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED
SET PROCESSING_STATUS = 'INGESTED',
PROCESS_NAME = :new.service_name
WHERE A_WORKFLOW_HISTORY_KEY = :new.a_workflow_history_key;
END IF;
END IF;
END;
/
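The MARS-1409 branch of the trigger can be exercised with an anonymous test like the following (a sketch only; the key value `12345` is hypothetical and the tables are assumed to exist as defined above):

```sql
-- Sketch: simulate a workflow completing successfully so the AFTER UPDATE
-- trigger marks its linked files as INGESTED. Key 12345 is hypothetical.
UPDATE ct_mrds.a_workflow_history
   SET workflow_successful = 'Y'
 WHERE a_workflow_history_key = 12345;

-- The trigger should now have updated the linked files:
SELECT processing_status, process_name
  FROM ct_mrds.a_source_file_received
 WHERE a_workflow_history_key = 12345;
-- Expected: PROCESSING_STATUS = 'INGESTED',
--           PROCESS_NAME = the workflow's SERVICE_NAME
```

Note the guard on `:old.workflow_successful` means re-saving an already successful row does not re-fire the INSERT into `ct_ods.a_load_history` or re-mark the files.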

File diff suppressed because it is too large


@@ -0,0 +1,220 @@
create or replace PACKAGE CT_MRDS.DATA_EXPORTER
AUTHID CURRENT_USER
AS
/**
* Data Export Package: Provides comprehensive data export capabilities to various formats (CSV, Parquet)
* with support for cloud storage integration via Oracle Cloud Infrastructure (OCI).
* The structure of this comment is used by the GET_PACKAGE_DOCUMENTATION function,
* which returns documentation text for the Confluence page (ready to copy-paste).
**/
-- Package Version Information
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '2.17.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(19) := '2026-03-11 10:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(50) := 'MRDS Development Team';
-- Version History (last 3-5 changes)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'v2.17.0 (2026-03-11): PARQUET FIX - Added pFormat parameter to buildQueryWithDateFormats. REPLACE(col,CHR(34)) now applied only when pFormat=CSV. EXPORT_TABLE_DATA_BY_DATE passes PARQUET - string data was being corrupted (single " doubled to ""). Parquet is binary and needs no quote escaping.' || CHR(10) ||
'v2.16.0 (2026-03-11): RFC 4180 FIX - Added REPLACE(col,CHR(34),CHR(34)||CHR(34)) in buildQueryWithDateFormats for VARCHAR2/CHAR/CLOB. Pre-doubled values produce compliant CSV for ORACLE_LOADER OPTIONALLY ENCLOSED BY chr(34).' || CHR(10) ||
'v2.6.3 (2026-01-28): COMPILATION FIX - Resolved ORA-00904 error in EXPORT_PARTITION_PARALLEL. SQLERRM and DBMS_UTILITY.FORMAT_ERROR_BACKTRACE cannot be used directly in SQL UPDATE statements. Now properly assigned to vgMsgTmp variable before UPDATE.' || CHR(10) ||
'v2.6.2 (2026-01-28): CRITICAL FIX - Race condition when multiple exports run simultaneously. Changed DELETE to filter by age (>24h) instead of deleting all COMPLETED chunks. Prevents concurrent sessions from deleting each other chunks. Session-safe cleanup with TASK_NAME filtering. Enables true parallel execution of multiple export jobs.' || CHR(10) ||
'v2.6.1 (2026-01-28): Added DELETE_FAILED_EXPORT_FILE procedure to clean up partial/corrupted files before retry. When partition fails mid-export, partial file is deleted before retry to prevent Oracle from creating _1 suffixed duplicates. Ensures clean retry without orphaned files in OCI bucket.' || CHR(10) ||
'v2.6.0 (2026-01-28): CRITICAL FIX - Added STATUS tracking to A_PARALLEL_EXPORT_CHUNKS table to prevent data duplication on retry. System now restarts ONLY failed partitions instead of re-exporting all data. Added ERROR_MESSAGE and EXPORT_TIMESTAMP columns for better error handling and monitoring. Prevents duplicate file creation when parallel tasks fail (e.g., 22 partitions with 16 threads, 3 failures no longer duplicates 19 successful exports).' || CHR(10) ||
'v2.5.0 (2026-01-26): Added recorddelimiter parameter with CRLF (CHR(13)||CHR(10)) for CSV exports to ensure Windows-compatible line endings. Improves cross-platform compatibility when CSV files are opened in Windows applications (Notepad, Excel).' || CHR(10) ||
'v2.4.0 (2026-01-11): Added pTemplateTableName parameter for per-column date format configuration. Implements dynamic query building with TO_CHAR for each date/timestamp column using FILE_MANAGER.GET_DATE_FORMAT. Supports 3-tier hierarchy: column-specific, template DEFAULT, global fallback. Eliminates single dateformat limitation of DBMS_CLOUD.EXPORT_DATA.' || CHR(10) ||
'v2.3.0 (2025-12-20): Added parallel partition processing using DBMS_PARALLEL_EXECUTE. New pParallelDegree parameter (1-16, default 1) for EXPORT_TABLE_DATA_BY_DATE and EXPORT_TABLE_DATA_TO_CSV_BY_DATE procedures. Each year/month partition processed in separate thread for improved performance.' || CHR(10) ||
'v2.2.0 (2025-12-19): DRY refactoring - extracted shared helper functions (sanitizeFilename, VALIDATE_TABLE_AND_COLUMNS, GET_PARTITIONS, EXPORT_SINGLE_PARTITION worker procedure). Reduced code duplication by ~400 lines. Prepared architecture for v2.3.0 parallel processing.' || CHR(10) ||
'v2.1.1 (2025-12-04): Fixed JOIN column reference A_WORKFLOW_HISTORY_KEY -> A_ETL_LOAD_SET_KEY, added consistent column mapping and dynamic column list to EXPORT_TABLE_DATA procedure, enhanced DEBUG logging for all export operations' || CHR(10) ||
'v2.1.0 (2025-10-22): Added version tracking and PARTITION_YEAR/PARTITION_MONTH support' || CHR(10) ||
'v2.0.0 (2025-10-01): Separated export functionality from FILE_MANAGER package' || CHR(10);
cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);
vgMsgTmp VARCHAR2(32000);
---------------------------------------------------------------------------------------------------------------------------
-- TYPE DEFINITIONS FOR PARTITION HANDLING
---------------------------------------------------------------------------------------------------------------------------
/**
* Record type for year/month partition information
**/
TYPE partition_rec IS RECORD (
year VARCHAR2(4),
month VARCHAR2(2)
);
/**
* Table type for collection of partition records
**/
TYPE partition_tab IS TABLE OF partition_rec;
---------------------------------------------------------------------------------------------------------------------------
-- INTERNAL PARALLEL PROCESSING CALLBACK
---------------------------------------------------------------------------------------------------------------------------
/**
* @name EXPORT_PARTITION_PARALLEL
* @desc Internal callback procedure for DBMS_PARALLEL_EXECUTE.
* Processes single partition (year/month) chunk in parallel task.
* Called by DBMS_PARALLEL_EXECUTE framework for each chunk.
* This procedure is PUBLIC because DBMS_PARALLEL_EXECUTE requires it,
* but should NOT be called directly by external code.
* @param pStartId - Chunk start ID (CHUNK_ID from A_PARALLEL_EXPORT_CHUNKS table)
* @param pEndId - Chunk end ID (same as pStartId for single-row chunks)
**/
PROCEDURE EXPORT_PARTITION_PARALLEL (
pStartId IN NUMBER,
pEndId IN NUMBER
);
---------------------------------------------------------------------------------------------------------------------------
-- MAIN EXPORT PROCEDURES
---------------------------------------------------------------------------------------------------------------------------
/**
* @name EXPORT_TABLE_DATA
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data into a CSV file on OCI infrastructure.
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example
* begin
* DATA_EXPORTER.EXPORT_TABLE_DATA(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'csv_exports'
* );
* end;
**/
PROCEDURE EXPORT_TABLE_DATA (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
/**
* @name EXPORT_TABLE_DATA_BY_DATE
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data into PARQUET files on OCI infrastructure.
* Each YEAR_MONTH pair goes to a separate file (implicit partitioning).
* Allows specifying custom column list or uses T.* if pColumnList is NULL.
* Validates that all columns in pColumnList exist in the target table.
* Automatically adds 'T.' prefix to column names in pColumnList.
* Supports parallel partition processing via pParallelDegree parameter (default 1, range 1-16).
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example
* begin
* DATA_EXPORTER.EXPORT_TABLE_DATA_BY_DATE(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'parquet_exports',
* pColumnList => 'COLUMN1, COLUMN2, COLUMN3', -- Optional
* pMinDate => DATE '2024-01-01',
* pMaxDate => SYSDATE,
* pParallelDegree => 8 -- Optional, default 1, range 1-16
* );
* end;
**/
PROCEDURE EXPORT_TABLE_DATA_BY_DATE (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pColumnList IN VARCHAR2 default NULL,
pMinDate IN DATE default DATE '1900-01-01',
pMaxDate IN DATE default SYSDATE,
pParallelDegree IN NUMBER default 1,
pTemplateTableName IN VARCHAR2 default NULL,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
/**
* @name EXPORT_TABLE_DATA_TO_CSV_BY_DATE
* @desc Exports data to separate CSV files partitioned by year and month.
* Creates one CSV file for each year/month combination found in the data.
* Uses the same date filtering mechanism with CT_ODS.A_LOAD_HISTORY as EXPORT_TABLE_DATA_BY_DATE,
* but exports to CSV format instead of Parquet.
* Supports parallel partition processing via pParallelDegree parameter (1-16).
* File naming pattern: {pFileName}_YYYYMM.csv or {TABLENAME}_YYYYMM.csv (if pFileName is NULL)
* @example
* begin
* -- With custom filename
* DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'exports',
* pFileName => 'my_export.csv',
* pMinDate => DATE '2024-01-01',
* pMaxDate => SYSDATE,
* pParallelDegree => 8 -- Optional, default 1, range 1-16
* );
*
* -- With auto-generated filename (based on table name only)
* DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
* pSchemaName => 'OU_TOP',
* pTableName => 'AGGREGATED_ALLOTMENT',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'ARCHIVE',
* pFolderName => 'exports',
* pMinDate => DATE '2025-09-01',
* pMaxDate => DATE '2025-09-17'
* );
* -- This will create files like: AGGREGATED_ALLOTMENT_202509.csv, etc.
* end;
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
**/
PROCEDURE EXPORT_TABLE_DATA_TO_CSV_BY_DATE (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pFileName IN VARCHAR2 DEFAULT NULL,
pColumnList IN VARCHAR2 default NULL,
pMinDate IN DATE default DATE '1900-01-01',
pMaxDate IN DATE default SYSDATE,
pParallelDegree IN NUMBER default 1,
pTemplateTableName IN VARCHAR2 default NULL,
pMaxFileSize IN NUMBER default 104857600,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
---------------------------------------------------------------------------------------------------------------------------
-- VERSION MANAGEMENT FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* Returns the current package version number
* return: Version string in format X.Y.Z (e.g., '2.1.0')
**/
FUNCTION GET_VERSION RETURN VARCHAR2;
/**
* Returns comprehensive build information including version, date, and author
* return: Formatted string with complete build details
**/
FUNCTION GET_BUILD_INFO RETURN VARCHAR2;
/**
* Returns the version history with recent changes
* return: Multi-line string with version history
**/
FUNCTION GET_VERSION_HISTORY RETURN VARCHAR2;
END;
/
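The version history above references the A_PARALLEL_EXPORT_CHUNKS tracking table used by the DBMS_PARALLEL_EXECUTE callback. A monitoring query along these lines can show which partition chunks failed and are eligible for retry (a sketch; the column set — CHUNK_ID, TASK_NAME, STATUS, ERROR_MESSAGE, EXPORT_TIMESTAMP — is inferred from the v2.6.0/v2.6.2 notes and may differ):

```sql
-- Sketch: inspect per-partition export state after a parallel run.
-- Column names are taken from the v2.6.x release notes; verify against
-- the actual table definition before use.
SELECT chunk_id,
       task_name,
       status,            -- failed chunks are restarted; COMPLETED ones are skipped
       error_message,
       export_timestamp
  FROM ct_mrds.a_parallel_export_chunks
 WHERE status <> 'COMPLETED'
 ORDER BY chunk_id;
```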


@@ -324,7 +324,7 @@ AS
ERR_UNKNOWN EXCEPTION;
CODE_UNKNOWN CONSTANT PLS_INTEGER := -20999;
MSG_UNKNOWN VARCHAR2(4000) := 'Unknown Error Occured';
MSG_UNKNOWN VARCHAR2(4000) := 'Unknown Error Occurred';
PRAGMA EXCEPTION_INIT( ERR_UNKNOWN
,CODE_UNKNOWN);


@@ -58,14 +58,15 @@ AS
BEGIN
vParameters := CT_MRDS.ENV_MANAGER.FORMAT_PARAMETERS(SYS.ODCIVARCHAR2LIST('pSourceFileConfigKey => '||nvl(to_char(pSourceFileConfigKey),NULL)));
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Start','DEBUG', vParameters);
SELECT count(*) , min(SOURCE_FILE_TYPE)
-- LEFT JOIN ensures SOURCE_FILE_TYPE is retrieved from config even when no stats exist yet
SELECT count(s.A_SOURCE_FILE_CONFIG_KEY), min(c.SOURCE_FILE_TYPE)
INTO vCount, vSourceFileType
FROM CT_MRDS.A_TABLE_STAT s
JOIN CT_MRDS.A_SOURCE_FILE_CONFIG c
FROM CT_MRDS.A_SOURCE_FILE_CONFIG c
LEFT JOIN CT_MRDS.A_TABLE_STAT s
ON s.A_SOURCE_FILE_CONFIG_KEY = c.A_SOURCE_FILE_CONFIG_KEY
WHERE s.A_SOURCE_FILE_CONFIG_KEY = pSourceFileConfigKey;
WHERE c.A_SOURCE_FILE_CONFIG_KEY = pSourceFileConfigKey;
IF vCount=0 and vSourceFileType='INPUT' THEN
IF vCount = 0 AND vSourceFileType = 'INPUT' THEN
GATHER_TABLE_STAT(pSourceFileConfigKey);
END IF;
@@ -74,9 +75,13 @@ AS
INTO vTableStat
FROM CT_MRDS.A_TABLE_STAT
WHERE A_SOURCE_FILE_CONFIG_KEY = pSourceFileConfigKey;
-- EXCEPTION
-- WHEN NO_DATA_FOUND THEN
--
EXCEPTION
WHEN NO_DATA_FOUND THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(
'No statistics found in A_TABLE_STAT for config key ' || pSourceFileConfigKey
|| ' (SOURCE_FILE_TYPE=' || NVL(vSourceFileType, 'NULL') || '). Cannot proceed with archival.',
'ERROR', vParameters);
RAISE;
END;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('End','DEBUG',vParameters);
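The LEFT JOIN fix in the hunk above can be seen in isolation (a sketch; `:cfg_key` is a placeholder bind variable): counting a column from the optional side is NULL-safe, so a config row with no statistics still yields the row the caller needs.

```sql
-- With LEFT JOIN, a config key that has no A_TABLE_STAT rows returns
-- stat_count = 0 (COUNT of a NULL column ignores it) while still
-- surfacing SOURCE_FILE_TYPE from the config, so the
-- "gather stats on first run" branch can fire. The old inner join
-- returned no row at all and SOURCE_FILE_TYPE stayed NULL.
SELECT COUNT(s.a_source_file_config_key) AS stat_count,
       MIN(c.source_file_type)           AS source_file_type
  FROM ct_mrds.a_source_file_config c
  LEFT JOIN ct_mrds.a_table_stat s
    ON s.a_source_file_config_key = c.a_source_file_config_key
 WHERE c.a_source_file_config_key = :cfg_key;
```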
@@ -120,8 +125,8 @@ AS
END IF;
-- Get TRASH policy from configuration
vKeepInTrash := (vSourceFileConfig.IS_KEEP_IN_TRASH = 'Y');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('TRASH policy from config: IS_KEEP_IN_TRASH=' || vSourceFileConfig.IS_KEEP_IN_TRASH, 'INFO', vParameters);
vKeepInTrash := (vSourceFileConfig.IS_KEPT_IN_TRASH = 'Y');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('TRASH policy from config: IS_KEPT_IN_TRASH=' || vSourceFileConfig.IS_KEPT_IN_TRASH, 'INFO', vParameters);
vTableStat := GET_TABLE_STAT(pSourceFileConfigKey => pSourceFileConfigKey);
@@ -139,18 +144,18 @@ AS
IF vSourceFileConfig.ARCHIVAL_STRATEGY = 'MINIMUM_AGE_MONTHS' THEN
-- MINIMUM_AGE_MONTHS: Archive based on age only, ignore thresholds
vArchivalTriggeredBy := 'AGE_BASED';
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archival strategy: MINIMUM_AGE_MONTHS (threshold-independent)','INFO');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archival strategy: MINIMUM_AGE_MONTHS (threshold-independent)','INFO', vParameters);
ELSE
-- THRESHOLD_BASED and HYBRID: Check thresholds
if vTableStat.OVER_ARCH_THRESOLD_FILE_COUNT >= vSourceFileConfig.ARCHIVE_THRESHOLD_FILES_COUNT then vArchivalTriggeredBy := 'FILES_COUNT';
elsif vTableStat.OVER_ARCH_THRESOLD_ROW_COUNT >= vSourceFileConfig.ARCHIVE_THRESHOLD_ROWS_COUNT then vArchivalTriggeredBy := vArchivalTriggeredBy||', ROWS_COUNT';
elsif vTableStat.OVER_ARCH_THRESOLD_SIZE >= vSourceFileConfig.ARCHIVE_THRESHOLD_BYTES_SUM then vArchivalTriggeredBy := vArchivalTriggeredBy||', BYTES_SUM';
else CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Non of archival triggers reached','INFO');
elsif vTableStat.OVER_ARCH_THRESOLD_ROW_COUNT >= vSourceFileConfig.ARCHIVE_THRESHOLD_ROWS_COUNT then vArchivalTriggeredBy := vArchivalTriggeredBy||', ROWS_COUNT';
elsif vTableStat.OVER_ARCH_THRESOLD_TOTAL_SIZE >= vSourceFileConfig.ARCHIVE_THRESHOLD_BYTES_SUM then vArchivalTriggeredBy := vArchivalTriggeredBy||', BYTES_SUM';
else CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('None of the archival triggers reached','INFO', vParameters);
end if;
END IF;
if LENGTH(vArchivalTriggeredBy)>0 THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archival Triggered By: '||vArchivalTriggeredBy,'INFO');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archival Triggered By: '||vArchivalTriggeredBy,'INFO', vParameters);
vTableName := DBMS_ASSERT.SCHEMA_NAME(vSourceFileConfig.ODS_SCHEMA_NAME) || '.'||DBMS_ASSERT.simple_sql_name(vSourceFileConfig.TABLE_ID)||'_ODS';
-- Use strategy-based WHERE clause (MARS-828)
@@ -166,12 +171,37 @@ AS
join CT_MRDS.a_workflow_history h
on s.a_workflow_history_key = h.a_workflow_history_key
where ' || GET_ARCHIVAL_WHERE_CLAUSE(vSourceFileConfig) || '
and h.WORKFLOW_SUCCESSFUL = ''Y''
' || CASE WHEN vSourceFileConfig.IS_WORKFLOW_SUCCESS_REQUIRED = 'Y' THEN 'and h.WORKFLOW_SUCCESSFUL = ''Y''' ELSE '' END || '
group by file$name, file$path, to_char(h.workflow_start,''yyyy''), to_char(h.workflow_start,''mm'')'
;
-- Get all files that will be archived into "vfiles" collection ("regular data files")
execute immediate vQuery bulk collect into vfiles;
-- MARS-1468: Handle ORA-29913/ORA-12801 - no files in ODS bucket (empty external table location)
-- ORA-29913 may come directly or wrapped in ORA-12801 (parallel query) with KUP-05002 root cause
BEGIN
execute immediate vQuery bulk collect into vfiles;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE IN (-29913, -12801) AND DBMS_UTILITY.FORMAT_ERROR_STACK LIKE '%KUP-05002%' THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('No files found in ODS bucket (empty location, SQLCODE=' || SQLCODE || '). Nothing to archive.', 'INFO', vParameters);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO',vParameters);
RETURN;
ELSE
RAISE;
END IF;
END;
-- Check if any files match archival criteria
IF vfiles.COUNT = 0 THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(
'No files matching archival criteria found (strategy: ' || vSourceFileConfig.ARCHIVAL_STRATEGY
|| ', IS_WORKFLOW_SUCCESS_REQUIRED: ' || vSourceFileConfig.IS_WORKFLOW_SUCCESS_REQUIRED || '). Nothing to archive.',
'INFO', vParameters);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO',vParameters);
RETURN;
END IF;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Files matching archival criteria: ' || vfiles.COUNT, 'INFO', vParameters);
-- Start EXPORT "regular data files" to parquet and DROP "csv"
FOR ym_loop IN (select distinct year, month from table(vfiles) order by 1,2) LOOP
@@ -187,12 +217,12 @@ AS
on s.a_workflow_history_key = h.a_workflow_history_key
and to_char(h.workflow_start,''yyyy'') = '''||ym_loop.year||'''
and to_char(h.workflow_start,''mm'') = '''||ym_loop.month||'''
and h.WORKFLOW_SUCCESSFUL = ''Y''
'|| CASE WHEN vSourceFileConfig.IS_WORKFLOW_SUCCESS_REQUIRED = 'Y' THEN 'and h.WORKFLOW_SUCCESSFUL = ''Y''' ELSE '' END ||'
'
;
vUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('ARCHIVE')||'ARCHIVE/'||vSourceFileConfig.A_SOURCE_KEY||'/'||vSourceFileConfig.TABLE_ID||'/PARTITION_YEAR='||ym_loop.year||'/PARTITION_MONTH='||ym_loop.month||'/';
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Start Archiving for YEAR_MONTH: '||ym_loop.year||'_'||ym_loop.month ,'INFO');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Start Archiving for YEAR_MONTH: '||ym_loop.year||'_'||ym_loop.month ,'INFO', vParameters);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Parameter for DBMS_CLOUD.EXPORT_DATA => file_uri_list' ,'DEBUG',vUri);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Parameter for DBMS_CLOUD.EXPORT_DATA => query' ,'DEBUG',vQuery);
@@ -214,7 +244,7 @@ AS
END;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vOperationId of export: '||vOperationId,'DEBUG');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vOperationId of export: '||vOperationId,'DEBUG', vParameters);
-- Get USER_LOAD_OPERATIONS info
select *
@@ -267,10 +297,10 @@ AS
target_object_uri => replace(f.pathname,'ODS','TRASH')||'/'||f.filename,
target_credential_name => ENV_MANAGER.gvCredentialName
);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File moved to TRASH folder.','DEBUG', f.pathname||'/'||f.filename);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File moved to TRASH folder: '||f.pathname||'/'||f.filename,'DEBUG', vParameters);
EXCEPTION
WHEN OTHERS THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Failed to move file to TRASH folder.','ERROR', f.pathname||'/'||f.filename);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Failed to move file to TRASH folder: '||f.pathname||'/'||f.filename,'ERROR', vParameters);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(CT_MRDS.ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
rollback;
vProcessControlStatus := 'MOVE_FILE_TO_TRASH_FAILURE';
@@ -288,7 +318,7 @@ AS
FOR f in (select filename, pathname from table(vfiles) where year = ym_loop.year and month = ym_loop.month) LOOP
DBMS_CLOUD.DELETE_OBJECT(credential_name => CT_MRDS.ENV_MANAGER.gvCredentialName,
object_uri => replace(f.pathname,'ODS','TRASH')||'/'||f.filename);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File dropped from TRASH folder.','DEBUG', f.pathname||'/'||f.filename);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File dropped from TRASH folder: '||f.pathname||'/'||f.filename,'DEBUG', vParameters);
-- Update status to ARCHIVED_AND_PURGED
UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED r
@@ -297,10 +327,10 @@ AS
AND r.source_file_name = f.filename
AND r.PROCESSING_STATUS = 'ARCHIVED_AND_TRASHED';
END LOOP;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('All archived files removed from TRASH folder and marked as ARCHIVED_AND_PURGED (config: IS_KEEP_IN_TRASH=N).','INFO');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('All archived files removed from TRASH folder and marked as ARCHIVED_AND_PURGED (config: IS_KEPT_IN_TRASH=N).','INFO', vParameters);
ELSE
-- Keep files in TRASH folder (status remains ARCHIVED_AND_TRASHED)
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archived files kept in TRASH folder for retention (config: IS_KEEP_IN_TRASH=Y, status: ARCHIVED_AND_TRASHED).','INFO');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archived files kept in TRASH folder for retention (config: IS_KEPT_IN_TRASH=Y, status: ARCHIVED_AND_TRASHED).','INFO', vParameters);
END IF;
--ROLLBACK PART
@@ -321,7 +351,7 @@ AS
target_object_uri => f.pathname||'/'||f.filename,
target_credential_name => ENV_MANAGER.gvCredentialName
);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File restored from TRASH folder.','DEBUG', f.pathname||'/'||f.filename);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File restored from TRASH folder: '||f.pathname||'/'||f.filename,'DEBUG', vParameters);
UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED r
SET PROCESSING_STATUS = 'INGESTED'
@@ -332,7 +362,7 @@ AS
EXCEPTION
WHEN OTHERS THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Failed to restore file from TRASH folder.','ERROR', replace(f.pathname,'ODS','TRASH')||'/'||f.filename);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Failed to restore file from TRASH folder: '||replace(f.pathname,'ODS','TRASH')||'/'||f.filename,'ERROR', vParameters);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(CT_MRDS.ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
vProcessControlStatus := 'RESTORE_FILE_FROM_TRASH_FAILURE';
END;
@@ -368,12 +398,12 @@ AS
credential_name => CT_MRDS.ENV_MANAGER.gvCredentialName,
object_uri => vFilename || arch_file.object_name
);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archival PARQUET file dropped.','DEBUG', vFilename || arch_file.object_name);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archival PARQUET file dropped: '||vFilename || arch_file.object_name,'DEBUG', vParameters);
END LOOP;
RAISE_APPLICATION_ERROR(CT_MRDS.ENV_MANAGER.CODE_CHANGE_STAT_TO_ARCHIVED_FAILED, CT_MRDS.ENV_MANAGER.MSG_CHANGE_STAT_TO_ARCHIVED_FAILED);
ELSIF vProcessControlStatus = 'RESTORE_FILE_FROM_TRASH_FAILURE' THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Some files were not restored from TRASH. Check A_PROCESS_LOG table for details','ERROR');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Some files were not restored from TRASH. Check A_PROCESS_LOG table for details','ERROR', vParameters);
RAISE_APPLICATION_ERROR(CT_MRDS.ENV_MANAGER.CODE_RESTORE_FILE_FROM_TRASH, CT_MRDS.ENV_MANAGER.MSG_RESTORE_FILE_FROM_TRASH);
END IF;
@@ -438,30 +468,52 @@ AS
vSourceFileConfig CT_MRDS.A_SOURCE_FILE_CONFIG%ROWTYPE;
vTableName VARCHAR2(200);
vQuery VARCHAR2(32000);
vWhereClause VARCHAR2(4000);
vOdsBucketUri VARCHAR2(1000);
vWhereClause VARCHAR2(4000);
vOverThresholdWhereClause VARCHAR2(4000);
vOdsBucketUri VARCHAR2(1000);
BEGIN
vParameters := CT_MRDS.ENV_MANAGER.FORMAT_PARAMETERS(SYS.ODCIVARCHAR2LIST('pSourceFileConfigKey => '||nvl(to_char(pSourceFileConfigKey), 'NULL')));
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Start','INFO', vParameters);
vSourceFileConfig := CT_MRDS.FILE_MANAGER.GET_SOURCE_FILE_CONFIG(pSourceFileConfigKey => pSourceFileConfigKey);
vTableName := DBMS_ASSERT.SCHEMA_NAME(vSourceFileConfig.ODS_SCHEMA_NAME) || '.'||DBMS_ASSERT.simple_sql_name(vSourceFileConfig.TABLE_ID)||'_ODS';
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vTableName','DEBUG',vTableName);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vTableName = '||vTableName, 'DEBUG', vParameters);
-- Get WHERE clause based on archival strategy (MARS-828)
vWhereClause := GET_ARCHIVAL_WHERE_CLAUSE(vSourceFileConfig);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vWhereClause','DEBUG',vWhereClause);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vWhereClause = '||vWhereClause, 'DEBUG', vParameters);
-- Get ODS bucket URI before building query
vOdsBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('ODS') || 'ODS/' || vSourceFileConfig.A_SOURCE_KEY || '/' || vSourceFileConfig.TABLE_ID || '/';
-- Build WHERE clause for OVER_ARCH_THRESOLD columns:
-- Combines archival strategy time-condition with optional workflow success filter.
-- IS_WORKFLOW_SUCCESS_REQUIRED='Y': only files with WORKFLOW_SUCCESSFUL='Y' are counted as eligible.
-- IS_WORKFLOW_SUCCESS_REQUIRED='N': all files passing the time-condition are counted as eligible.
IF vSourceFileConfig.IS_WORKFLOW_SUCCESS_REQUIRED = 'Y' THEN
vOverThresholdWhereClause := vWhereClause || ' AND workflow_successful = ''Y''';
ELSE
vOverThresholdWhereClause := vWhereClause;
END IF;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vOverThresholdWhereClause = '||vOverThresholdWhereClause, 'DEBUG', vParameters);
-- Use strategy-based WHERE clause for statistics (MARS-828)
-- FILE_COUNT, ROW_COUNT, TOTAL_SIZE: all files regardless of workflow success (never zero due to workflow filter)
-- OVER_ARCH_THRESOLD_*: IS_WORKFLOW_SUCCESS_REQUIRED-aware count of eligible files
-- WORKFLOW_SUCCESS_*: informational count of files with WORKFLOW_SUCCESSFUL='Y'
-- Column order MUST match A_TABLE_STAT column definition order for positional INTO vStats to work:
-- 1:A_TABLE_STAT_KEY, 2:A_SOURCE_FILE_CONFIG_KEY, 3:TABLE_NAME, 4:CREATED, 5:ARCHIVAL_STRATEGY,
-- 6:ARCH_MINIMUM_AGE_MONTHS, 7:ARCH_THRESHOLD_DAYS, 8:IS_WORKFLOW_SUCCESS_REQUIRED,
-- 9:FILE_COUNT, 10:ROW_COUNT, 11:TOTAL_SIZE,
-- 12:OVER_ARCH_THRESOLD_FILE_COUNT, 13:OVER_ARCH_THRESOLD_ROW_COUNT, 14:OVER_ARCH_THRESOLD_TOTAL_SIZE,
-- 15:WORKFLOW_SUCCESS_FILE_COUNT, 16:WORKFLOW_SUCCESS_ROW_COUNT, 17:WORKFLOW_SUCCESS_TOTAL_SIZE
vQuery :=
'with tmp as (
select
s.*
,file$name as filename
,h.workflow_start
,h.workflow_successful
, to_char(h.workflow_start,''yyyy'') as year
, to_char(h.workflow_start,''mm'') as month
from '||vTableName||' s
@@ -470,22 +522,31 @@ AS
)
, tmp_gr as (
select
filename, count(*) as row_count_per_file, min(workflow_start) as workflow_start
filename
,count(*) as row_count_per_file
,min(workflow_start) as workflow_start
,max(workflow_successful) as workflow_successful
from tmp
group by filename
)
select
NULL as A_TABLE_STAT_KEY
,'||pSourceFileConfigKey||' as A_SOURCE_FILE_CONFIG_KEY
,'''||vTableName||''' as TABLE_NAME
,count(*) as FILE_COUNT
,sum(case when ' || vWhereClause || ' then 1 else 0 end) as OLD_FILE_COUNT
,sum (row_count_per_file) as ROW_COUNT
,sum(case when ' || vWhereClause || ' then row_count_per_file else 0 end) as OLD_ROW_COUNT
,sum(r.bytes) as BYTES
,sum(case when ' || vWhereClause || ' then r.bytes else 0 end) as OLD_BYTES
,'||COALESCE(TO_CHAR(vSourceFileConfig.ARCHIVE_THRESHOLD_DAYS), 'NULL')||' as ARCHIVE_THRESHOLD_DAYS
,systimestamp as CREATED
NULL as A_TABLE_STAT_KEY
,'||pSourceFileConfigKey||' as A_SOURCE_FILE_CONFIG_KEY
,'''||vTableName||''' as TABLE_NAME
,systimestamp as CREATED
,'''||vSourceFileConfig.ARCHIVAL_STRATEGY||''' as ARCHIVAL_STRATEGY
,'||COALESCE(TO_CHAR(vSourceFileConfig.MINIMUM_AGE_MONTHS), 'NULL')||' as ARCH_MINIMUM_AGE_MONTHS
,'||COALESCE(TO_CHAR(vSourceFileConfig.ARCHIVE_THRESHOLD_DAYS), 'NULL')||' as ARCH_THRESHOLD_DAYS
,'''||vSourceFileConfig.IS_WORKFLOW_SUCCESS_REQUIRED||''' as IS_WORKFLOW_SUCCESS_REQUIRED
,count(*) as FILE_COUNT
,nvl(sum(row_count_per_file), 0) as ROW_COUNT
,nvl(sum(r.bytes), 0) as TOTAL_SIZE
,nvl(sum(case when ' || vOverThresholdWhereClause || ' then 1 else 0 end), 0) as OVER_ARCH_THRESOLD_FILE_COUNT
,nvl(sum(case when ' || vOverThresholdWhereClause || ' then row_count_per_file else 0 end), 0) as OVER_ARCH_THRESOLD_ROW_COUNT
,nvl(sum(case when ' || vOverThresholdWhereClause || ' then r.bytes else 0 end), 0) as OVER_ARCH_THRESOLD_TOTAL_SIZE
,nvl(sum(case when workflow_successful = ''Y'' then 1 else 0 end), 0) as WORKFLOW_SUCCESS_FILE_COUNT
,nvl(sum(case when workflow_successful = ''Y'' then row_count_per_file else 0 end), 0) as WORKFLOW_SUCCESS_ROW_COUNT
,nvl(sum(case when workflow_successful = ''Y'' then r.bytes else 0 end), 0) as WORKFLOW_SUCCESS_TOTAL_SIZE
from tmp_gr t
join (SELECT * from DBMS_CLOUD.LIST_OBJECTS(
credential_name => '''||CT_MRDS.ENV_MANAGER.gvCredentialName||''',
@@ -494,8 +555,35 @@ AS
) r
on t.filename = r.object_name'
;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vQuery','DEBUG',vQuery);
execute immediate vQuery into vStats;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vQuery:', 'DEBUG', vQuery);
-- MARS-1468: Handle ORA-29913/ORA-12801 - no files in ODS bucket (empty external table location)
-- ORA-29913 may come directly or wrapped in ORA-12801 (parallel query) with KUP-05002 root cause
BEGIN
execute immediate vQuery into vStats;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE IN (-29913, -12801) AND DBMS_UTILITY.FORMAT_ERROR_STACK LIKE '%KUP-05002%' THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('No files found in ODS bucket (empty location, SQLCODE=' || SQLCODE || '). Saving zero statistics.', 'INFO', vParameters);
vStats.A_SOURCE_FILE_CONFIG_KEY := pSourceFileConfigKey;
vStats.TABLE_NAME := vTableName;
vStats.FILE_COUNT := 0;
vStats.OVER_ARCH_THRESOLD_FILE_COUNT := 0;
vStats.ROW_COUNT := 0;
vStats.OVER_ARCH_THRESOLD_ROW_COUNT := 0;
vStats.TOTAL_SIZE := 0;
vStats.OVER_ARCH_THRESOLD_TOTAL_SIZE := 0;
vStats.ARCH_THRESHOLD_DAYS := vSourceFileConfig.ARCHIVE_THRESHOLD_DAYS;
vStats.CREATED := SYSTIMESTAMP;
vStats.ARCHIVAL_STRATEGY := vSourceFileConfig.ARCHIVAL_STRATEGY;
vStats.ARCH_MINIMUM_AGE_MONTHS := vSourceFileConfig.MINIMUM_AGE_MONTHS;
vStats.IS_WORKFLOW_SUCCESS_REQUIRED := vSourceFileConfig.IS_WORKFLOW_SUCCESS_REQUIRED;
vStats.WORKFLOW_SUCCESS_FILE_COUNT := 0;
vStats.WORKFLOW_SUCCESS_ROW_COUNT := 0;
vStats.WORKFLOW_SUCCESS_TOTAL_SIZE := 0;
ELSE
RAISE;
END IF;
END;
vStats.A_TABLE_STAT_KEY := CT_MRDS.A_TABLE_STAT_KEY_SEQ.NEXTVAL;
insert into CT_MRDS.A_TABLE_STAT_HIST values vStats;
@@ -611,10 +699,10 @@ AS
target_credential_name => CT_MRDS.ENV_MANAGER.gvCredentialName
);
vFilesRestored := vFilesRestored + 1;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File restored from TRASH','DEBUG', file_rec.SOURCE_FILE_NAME);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File restored from TRASH: '||file_rec.SOURCE_FILE_NAME,'DEBUG', vParameters);
EXCEPTION
WHEN OTHERS THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Failed to restore file from TRASH','ERROR', file_rec.SOURCE_FILE_NAME);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Failed to restore file from TRASH: '||file_rec.SOURCE_FILE_NAME,'ERROR', vParameters);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(CT_MRDS.ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
END;
END LOOP;
@@ -651,10 +739,10 @@ AS
target_credential_name => CT_MRDS.ENV_MANAGER.gvCredentialName
);
vFilesRestored := vFilesRestored + 1;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File restored from TRASH','DEBUG', file_rec.SOURCE_FILE_NAME);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File restored from TRASH: '||file_rec.SOURCE_FILE_NAME,'DEBUG', vParameters);
EXCEPTION
WHEN OTHERS THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Failed to restore file from TRASH','ERROR', file_rec.SOURCE_FILE_NAME);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Failed to restore file from TRASH: '||file_rec.SOURCE_FILE_NAME,'ERROR', vParameters);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(CT_MRDS.ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
END;
END LOOP;
@@ -1043,7 +1131,7 @@ AS
A_SOURCE_FILE_CONFIG_KEY,
TABLE_ID,
IS_ARCHIVE_ENABLED,
IS_KEEP_IN_TRASH,
IS_KEPT_IN_TRASH,
A_SOURCE_KEY
FROM CT_MRDS.A_SOURCE_FILE_CONFIG
WHERE SOURCE_FILE_TYPE = 'INPUT'
@@ -1068,7 +1156,7 @@ AS
ELSE
BEGIN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(
'Archiving table ' || config_rec.TABLE_ID || ' [Source: ' || config_rec.A_SOURCE_KEY || ', Config: ' || config_rec.A_SOURCE_FILE_CONFIG_KEY || ', IS_KEEP_IN_TRASH=' || config_rec.IS_KEEP_IN_TRASH || ']',
'Archiving table ' || config_rec.TABLE_ID || ' [Source: ' || config_rec.A_SOURCE_KEY || ', Config: ' || config_rec.A_SOURCE_FILE_CONFIG_KEY || ', IS_KEPT_IN_TRASH=' || config_rec.IS_KEPT_IN_TRASH || ']',
'INFO'
);

View File

@@ -17,13 +17,17 @@ AS
**/
-- Package Version Information (Semantic Versioning: MAJOR.MINOR.PATCH)
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.3.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-02-11 12:00:00';
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.4.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-03-17 11:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(100) := 'Grzegorz Michalski';
-- Version History (Latest changes first)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'3.3.0 (2026-02-11): Added IS_ARCHIVE_ENABLED and IS_KEEP_IN_TRASH columns to A_SOURCE_FILE_CONFIG for selective archiving and config-based TRASH policy. Removed pKeepInTrash parameter (now from config). Added ARCHIVE_ALL batch procedure with 3-level granularity (config/source/all). Added GATHER_TABLE_STAT_ALL batch statistics procedure with 3-level granularity. Added RESTORE_FILE_FROM_TRASH and PURGE_TRASH_FOLDER with 3-level granularity' || CHR(13)||CHR(10) ||
'3.4.0 (2026-03-17): MARS-1409 - Added IS_WORKFLOW_SUCCESS_REQUIRED flag to A_SOURCE_FILE_CONFIG (DEFAULT Y). ' ||
'Y=standard DBT flow (WORKFLOW_SUCCESSFUL=Y required), N=bypass for manual/non-DBT sources. ' ||
'Flag value stored in A_TABLE_STAT and A_TABLE_STAT_HIST for full audit of statistics basis.' || CHR(13)||CHR(10) ||
'3.3.1 (2026-03-13): Fixed ORA-29913 handling in ARCHIVE_TABLE_DATA (graceful RETURN when ODS bucket is empty) and GATHER_TABLE_STAT (saves zero statistics instead of raising error)' || CHR(13)||CHR(10) ||
'3.3.0 (2026-02-11): Added IS_ARCHIVE_ENABLED and IS_KEPT_IN_TRASH columns to A_SOURCE_FILE_CONFIG for selective archiving and config-based TRASH policy. Removed pKeepInTrash parameter (now from config). Added ARCHIVE_ALL batch procedure with 3-level granularity (config/source/all). Added GATHER_TABLE_STAT_ALL batch statistics procedure with 3-level granularity. Added RESTORE_FILE_FROM_TRASH and PURGE_TRASH_FOLDER with 3-level granularity' || CHR(13)||CHR(10) ||
'3.2.1 (2026-02-10): Fixed status update - ARCHIVED → ARCHIVED_AND_TRASHED when moving files to TRASH folder (critical bug fix)' || CHR(13)||CHR(10) ||
'3.2.0 (2026-02-06): Added pKeepInTrash parameter (DEFAULT TRUE) to ARCHIVE_TABLE_DATA for TRASH folder retention control - files kept in TRASH subfolder (DATA bucket) by default for safety and compliance' || CHR(13)||CHR(10) ||
'3.1.2 (2026-02-06): Fixed missing PARTITION_YEAR/PARTITION_MONTH assignments in UPDATE statement and export query circular dependency (now filters by workflow_start instead of partition fields)' || CHR(13)||CHR(10) ||
@@ -51,7 +55,7 @@ AS
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data from the table specified by pSourceFileConfigKey (A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY) into a PARQUET file on OCI infrastructure.
* Each YEAR_MONTH pair goes to a separate file (implicit partitioning).
* TRASH policy is controlled by A_SOURCE_FILE_CONFIG.IS_KEEP_IN_TRASH column ('Y'=keep in TRASH, 'N'=delete immediately).
* TRASH policy is controlled by A_SOURCE_FILE_CONFIG.IS_KEPT_IN_TRASH column ('Y'=keep in TRASH, 'N'=delete immediately).
**/
PROCEDURE ARCHIVE_TABLE_DATA (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE
@@ -62,7 +66,7 @@ AS
* @desc Function wrapper for ARCHIVE_TABLE_DATA procedure.
* Returns SQLCODE for Python library integration.
* Calls the main ARCHIVE_TABLE_DATA procedure and captures execution result.
* TRASH policy is controlled by A_SOURCE_FILE_CONFIG.IS_KEEP_IN_TRASH column ('Y'=keep in TRASH, 'N'=delete immediately).
* TRASH policy is controlled by A_SOURCE_FILE_CONFIG.IS_KEPT_IN_TRASH column ('Y'=keep in TRASH, 'N'=delete immediately).
* @example SELECT FILE_ARCHIVER.FN_ARCHIVE_TABLE_DATA(pSourceFileConfigKey => 123) FROM DUAL;
* @ex_rslt 0 (success) or error code
**/
@@ -137,7 +141,7 @@ AS
* @name ARCHIVE_ALL
* @desc Multi-level batch archival procedure with three granularity levels.
* Only processes configurations where IS_ARCHIVE_ENABLED='Y'.
* TRASH policy for each table is controlled by individual IS_KEEP_IN_TRASH column.
* TRASH policy for each table is controlled by individual IS_KEPT_IN_TRASH column.
* @param pSourceFileConfigKey - (LEVEL 1) Archive specific configuration key (highest priority)
* @param pSourceKey - (LEVEL 2) Archive all enabled tables for source system (e.g., 'LM', 'C2D') (medium priority)
* @param pArchiveAll - (LEVEL 3) When TRUE, archive ALL enabled tables across all sources (lowest priority)

View File

@@ -304,7 +304,6 @@ AS
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_FILE_ALREADY_REGISTERED, vgMsgTmp);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
@@ -331,7 +330,6 @@ AS
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.MSG_UNKNOWN);
@@ -549,7 +547,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_MISSING_COLUMN_DATE_FORMAT, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_MISSING_COLUMN_DATE_FORMAT, ENV_MANAGER.MSG_MISSING_COLUMN_DATE_FORMAT);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
@@ -912,7 +909,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO',vParameters);
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END DROP_EXTERNAL_TABLE;
@@ -953,7 +949,6 @@ AS
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.MSG_UNKNOWN);
END COPY_FILE;
@@ -1012,7 +1007,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_NO_CONFIG_FOR_RECEIVED_FILE, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_NO_CONFIG_FOR_RECEIVED_FILE, ENV_MANAGER.MSG_NO_CONFIG_FOR_RECEIVED_FILE);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.MSG_UNKNOWN);
END MOVE_FILE;
@@ -1226,7 +1220,6 @@ AS
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
@@ -1360,8 +1353,19 @@ AS
rec.quoted_column_name || ' VARCHAR2(' || rec.data_length || ')'
END
-- Other character types (preserve original logic)
WHEN rec.data_type IN ('CHAR', 'NCHAR', 'NVARCHAR2') THEN
rec.quoted_column_name || ' ' || rec.data_type || '(' || rec.data_length || ')'
-- MARS-1468: Fixed CHAR to use char_used/char_length (same as VARCHAR2 fix in MARS-1056)
WHEN rec.data_type = 'CHAR' THEN
CASE
WHEN rec.char_used = 'C' THEN
rec.quoted_column_name || ' CHAR(' || rec.char_length || ' CHAR)'
WHEN rec.char_used = 'B' THEN
rec.quoted_column_name || ' CHAR(' || rec.data_length || ' BYTE)'
ELSE
rec.quoted_column_name || ' CHAR(' || rec.data_length || ')'
END
-- MARS-1468: NCHAR/NVARCHAR2 - use char_length (data_length stores bytes in AL16UTF16, e.g. NCHAR(1) => data_length=2 but char_length=1)
WHEN rec.data_type IN ('NCHAR', 'NVARCHAR2') THEN
rec.quoted_column_name || ' ' || rec.data_type || '(' || rec.char_length || ')'
WHEN rec.data_type = 'NUMBER' THEN
rec.quoted_column_name || ' ' || rec.data_type ||
CASE
@@ -1396,8 +1400,13 @@ AS
-- Other TIMESTAMP types (without timezone)
-- SQL*Loader syntax: CHAR(length) DATE_FORMAT TIMESTAMP MASK "format" (not: TIMESTAMP 'format')
rec.quoted_column_name || ' CHAR(35) DATE_FORMAT TIMESTAMP MASK ' || CHR(39) || NORMALIZE_DATE_FORMAT(GET_DATE_FORMAT(pTemplateTableName => pTemplateTableName, pColumnName => rec.column_name)) || CHR(39)
WHEN rec.data_type IN ('CHAR', 'NCHAR', 'VARCHAR2', 'NVARCHAR2') THEN
-- For CSV field definitions, use data_length for CHAR() specification
WHEN rec.data_type IN ('VARCHAR2', 'CHAR') THEN
-- MARS-1468: For CHAR use char_length when char semantics (C), otherwise data_length
rec.quoted_column_name || ' CHAR(' ||
CASE WHEN rec.char_used = 'C' THEN rec.char_length ELSE rec.data_length END
|| ')'
WHEN rec.data_type IN ('NCHAR', 'NVARCHAR2') THEN
-- For CSV field definitions, use data_length for NCHAR/NVARCHAR2
rec.quoted_column_name || ' CHAR(' || rec.data_length || ')'
ELSE
rec.quoted_column_name
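The CHAR_USED/CHAR_LENGTH distinction driving the MARS-1468 branches above can be inspected directly in the data dictionary; `TEST_TAB` is a hypothetical table used for illustration:

```sql
-- For CHAR/VARCHAR2, CHAR_USED says whether the declared length is in
-- characters ('C') or bytes ('B'); DATA_LENGTH is always bytes. For
-- NCHAR/NVARCHAR2 in AL16UTF16, NCHAR(1) shows DATA_LENGTH=2 but
-- CHAR_LENGTH=1, which is why the generator uses CHAR_LENGTH there.
SELECT column_name,
       data_type,
       char_used,     -- 'C' = CHAR semantics, 'B' = BYTE semantics
       char_length,   -- declared length in characters
       data_length    -- storage length in bytes
  FROM all_tab_columns
 WHERE owner = 'CT_MRDS'
   AND table_name = 'TEST_TAB';  -- hypothetical table
```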
@@ -1426,7 +1435,6 @@ AS
-- ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_MISSING_COLUMN_DATE_FORMAT, 'ERROR', vParameters);
-- RAISE_ERROR(ENV_MANAGER.CODE_MISSING_COLUMN_DATE_FORMAT);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END GENERATE_EXTERNAL_TABLE_PARAMS;
@@ -1446,7 +1454,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_DUPLICATED_SOURCE_KEY, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_DUPLICATED_SOURCE_KEY, ENV_MANAGER.MSG_DUPLICATED_SOURCE_KEY);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END ADD_SOURCE;
@@ -1533,7 +1540,6 @@ AS
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END DELETE_SOURCE_CASCADE;
@@ -1567,7 +1573,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_MULTIPLE_CONTAINER_ENTRIES, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_MULTIPLE_CONTAINER_ENTRIES, ENV_MANAGER.MSG_MULTIPLE_CONTAINER_ENTRIES);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.MSG_UNKNOWN);
@@ -1610,7 +1615,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_MULTIPLE_MATCH_FOR_SRCFILE, vgMsgTmp);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.MSG_UNKNOWN);
@@ -1627,7 +1631,12 @@ AS
,pTableId IN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID%TYPE DEFAULT NULL
,pTemplateTableName IN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME%TYPE DEFAULT NULL
,pContainerFileKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY%TYPE DEFAULT NULL
,pEncoding IN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING%TYPE DEFAULT NULL -- MARS-1049: NEW PARAMETER
,pEncoding IN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING%TYPE DEFAULT NULL -- MARS-1049
,pIsWorkflowSuccessRequired IN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED%TYPE DEFAULT 'Y' -- MARS-1409
,pIsArchiveEnabled IN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED%TYPE DEFAULT 'N' -- MARS-828
,pIsKeptInTrash IN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_KEPT_IN_TRASH%TYPE DEFAULT 'Y' -- MARS-828
,pArchivalStrategy IN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY%TYPE DEFAULT 'THRESHOLD_BASED' -- MARS-828
,pMinimumAgeMonths IN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS%TYPE DEFAULT 0 -- MARS-828
) IS
vSourceFileConfigKey PLS_INTEGER;
vSourceKeyExists PLS_INTEGER := 0;
@@ -1643,10 +1652,15 @@ AS
,'pTemplateTableName => '''||nvl(to_char(pTemplateTableName), 'NULL')||''''
,'pContainerFileKey => '''||nvl(to_char(pContainerFileKey), 'NULL')||''''
,'pEncoding => '''||nvl(to_char(pEncoding), 'NULL')||'''' -- MARS-1049: NEW
,'pIsWorkflowSuccessRequired => '''||nvl(to_char(pIsWorkflowSuccessRequired), 'NULL')||'''' -- MARS-1409
,'pIsArchiveEnabled => '''||nvl(to_char(pIsArchiveEnabled), 'NULL')||'''' -- MARS-828
,'pIsKeptInTrash => '''||nvl(to_char(pIsKeptInTrash), 'NULL')||'''' -- MARS-828
,'pArchivalStrategy => '''||nvl(to_char(pArchivalStrategy), 'NULL')||'''' -- MARS-828
,'pMinimumAgeMonths => '''||nvl(to_char(pMinimumAgeMonths), 'NULL')||'''' -- MARS-828
));
ENV_MANAGER.LOG_PROCESS_EVENT('Start','INFO', vParameters);
INSERT INTO CT_MRDS.A_SOURCE_FILE_CONFIG(A_SOURCE_KEY, SOURCE_FILE_TYPE, SOURCE_FILE_ID, SOURCE_FILE_DESC, SOURCE_FILE_NAME_PATTERN, TABLE_ID, TEMPLATE_TABLE_NAME, CONTAINER_FILE_KEY, ENCODING)
VALUES (pSourceKey, pSourceFileType, pSourceFileId, pSourceFileDesc, pSourceFileNamePattern, pTableId, pTemplateTableName, pContainerFileKey, pEncoding);
INSERT INTO CT_MRDS.A_SOURCE_FILE_CONFIG(A_SOURCE_KEY, SOURCE_FILE_TYPE, SOURCE_FILE_ID, SOURCE_FILE_DESC, SOURCE_FILE_NAME_PATTERN, TABLE_ID, TEMPLATE_TABLE_NAME, CONTAINER_FILE_KEY, ENCODING, IS_WORKFLOW_SUCCESS_REQUIRED, IS_ARCHIVE_ENABLED, IS_KEPT_IN_TRASH, ARCHIVAL_STRATEGY, MINIMUM_AGE_MONTHS)
VALUES (pSourceKey, pSourceFileType, pSourceFileId, pSourceFileDesc, pSourceFileNamePattern, pTableId, pTemplateTableName, pContainerFileKey, pEncoding, pIsWorkflowSuccessRequired, pIsArchiveEnabled, pIsKeptInTrash, pArchivalStrategy, pMinimumAgeMonths);
COMMIT;
ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO',vParameters);
EXCEPTION
@@ -1659,7 +1673,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_MISSING_SOURCE_KEY, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_MISSING_SOURCE_KEY, ENV_MANAGER.MSG_MISSING_SOURCE_KEY);
ELSE
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END IF;
@@ -1688,7 +1701,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT('End','DEBUG',vParameters);
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END ADD_COLUMN_DATE_FORMAT;
@@ -1756,6 +1768,12 @@ AS
||cgBL||pLevel||'ARCHIVE_THRESHOLD_BYTES_SUM = '||pSourceFileConfig.ARCHIVE_THRESHOLD_BYTES_SUM
||cgBL||pLevel||'ARCHIVE_THRESHOLD_ROWS_COUNT = '||pSourceFileConfig.ARCHIVE_THRESHOLD_ROWS_COUNT
||cgBL||pLevel||'HOURS_TO_EXPIRE_STATISTICS = '||pSourceFileConfig.HOURS_TO_EXPIRE_STATISTICS
||cgBL||pLevel||'ENCODING = '||pSourceFileConfig.ENCODING
||cgBL||pLevel||'IS_ARCHIVE_ENABLED = '||pSourceFileConfig.IS_ARCHIVE_ENABLED
||cgBL||pLevel||'IS_KEPT_IN_TRASH = '||pSourceFileConfig.IS_KEPT_IN_TRASH
||cgBL||pLevel||'ARCHIVAL_STRATEGY = '||pSourceFileConfig.ARCHIVAL_STRATEGY
||cgBL||pLevel||'MINIMUM_AGE_MONTHS = '||pSourceFileConfig.MINIMUM_AGE_MONTHS
||cgBL||pLevel||'IS_WORKFLOW_SUCCESS_REQUIRED = '||pSourceFileConfig.IS_WORKFLOW_SUCCESS_REQUIRED
||cgBL||pLevel||''||'--------------------------------'
;

View File

@@ -17,12 +17,15 @@ AS
**/
-- Package Version Information (Semantic Versioning: MAJOR.MINOR.PATCH)
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.6.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-02-27 09:00:00';
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.6.3';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-03-17 12:30:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(100) := 'Grzegorz Michalski';
-- Version History (Latest changes first)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'3.6.3 (2026-03-17): MARS-828 - Added pIsArchiveEnabled, pIsKeptInTrash, pArchivalStrategy, pMinimumAgeMonths to ADD_SOURCE_FILE_CONFIG; FORMAT_CONFIG now shows all A_SOURCE_FILE_CONFIG columns' || CHR(13)||CHR(10) ||
'3.6.2 (2026-03-17): MARS-1409 - Added pIsWorkflowSuccessRequired parameter to ADD_SOURCE_FILE_CONFIG; IS_WORKFLOW_SUCCESS_REQUIRED shown in GET_DET_SOURCE_FILE_CONFIG_INFO output' || CHR(13)||CHR(10) ||
'3.6.1 (2026-03-13): MARS-1468 - Fixed CHAR/NCHAR/NVARCHAR2 column definitions in GENERATE_EXTERNAL_TABLE_PARAMS: CHAR now uses char_used/char_length semantics; NCHAR/NVARCHAR2 use char_length (data_length stores bytes in AL16UTF16)' || CHR(13)||CHR(10) ||
'3.6.0 (2026-02-27): MARS-1409 - Added A_WORKFLOW_HISTORY_KEY tracking in A_SOURCE_FILE_RECEIVED. Each file now stores its workflow execution key extracted during VALIDATE_SOURCE_FILE_RECEIVED' || CHR(13)||CHR(10) ||
'3.5.1 (2026-02-24): Fixed TIMESTAMP field syntax in GENERATE_EXTERNAL_TABLE_PARAMS for SQL*Loader compatibility (CHAR(35) DATE_FORMAT TIMESTAMP MASK format)' || CHR(13)||CHR(10) ||
'3.3.2 (2026-02-20): MARS-828 - Fixed threshold column names in GET_DET_SOURCE_FILE_CONFIG_INFO for MARS-828 compatibility' || CHR(13)||CHR(10) ||
@@ -442,13 +445,23 @@ AS
* @name ADD_SOURCE_FILE_CONFIG
* @desc Insert a new record to A_SOURCE_FILE_CONFIG table.
* MARS-1049: Added pEncoding parameter for CSV character set specification.
* MARS-1409: Added pIsWorkflowSuccessRequired parameter.
* MARS-828: Added pIsArchiveEnabled, pIsKeptInTrash, pArchivalStrategy, pMinimumAgeMonths.
* @param pEncoding - Character set encoding for CSV files (e.g., 'UTF8', 'WE8MSWIN1252', 'EE8ISO8859P2')
* If NULL, no CHARACTERSET clause is added to external table definitions
* @param pIsWorkflowSuccessRequired - 'Y' (default) = archival requires WORKFLOW_SUCCESSFUL='Y' (standard DBT flow)
* 'N' = archive regardless of workflow status (bypass for manual/non-DBT sources)
* @param pIsArchiveEnabled - 'Y' = enable automatic archival for this config; 'N' (default) = disabled
* @param pIsKeptInTrash - 'Y' (default) = keep archived files in the TRASH folder before purge; 'N' = delete immediately
* @param pArchivalStrategy - Archival strategy: 'THRESHOLD_BASED' (default) or 'MINIMUM_AGE_MONTHS'
* @param pMinimumAgeMonths - Minimum age in months before a file is eligible for archival (used with the MINIMUM_AGE_MONTHS strategy)
* @example CALL CT_MRDS.FILE_MANAGER.ADD_SOURCE_FILE_CONFIG(
* pSourceKey => 'C2D', pSourceFileType => 'INPUT',
* pSourceFileId => 'UC_DISSEM', pTableId => 'METADATA_LOADS',
* pTemplateTableName => 'CT_ET_TEMPLATES.C2D_A_UC_DISSEM_METADATA_LOADS',
* pEncoding => 'UTF8'
* pEncoding => 'UTF8', pIsWorkflowSuccessRequired => 'Y',
* pIsArchiveEnabled => 'Y', pIsKeptInTrash => 'N',
* pArchivalStrategy => 'MINIMUM_AGE_MONTHS', pMinimumAgeMonths => 3
* );
**/
PROCEDURE ADD_SOURCE_FILE_CONFIG (
@@ -460,7 +473,12 @@ AS
,pTableId IN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID%TYPE DEFAULT NULL
,pTemplateTableName IN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME%TYPE DEFAULT NULL
,pContainerFileKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY%TYPE DEFAULT NULL
,pEncoding IN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING%TYPE DEFAULT NULL -- MARS-1049: NEW PARAMETER
,pEncoding IN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING%TYPE DEFAULT NULL -- MARS-1049
,pIsWorkflowSuccessRequired IN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED%TYPE DEFAULT 'Y' -- MARS-1409
,pIsArchiveEnabled IN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED%TYPE DEFAULT 'N' -- MARS-828
,pIsKeptInTrash IN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_KEPT_IN_TRASH%TYPE DEFAULT 'Y' -- MARS-828
,pArchivalStrategy IN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY%TYPE DEFAULT 'THRESHOLD_BASED' -- MARS-828
,pMinimumAgeMonths IN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS%TYPE DEFAULT 0 -- MARS-828
);
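Under the extended signature above, a call exercising the new MARS-1409/MARS-828 parameters might look like the following sketch; all key values are illustrative, not real configuration:

```sql
BEGIN
  CT_MRDS.FILE_MANAGER.ADD_SOURCE_FILE_CONFIG(
    pSourceKey                 => 'LM',            -- illustrative source key
    pSourceFileType            => 'INPUT',
    pSourceFileId              => 'SAMPLE_FEED',   -- hypothetical file id
    pTableId                   => 'SAMPLE_TABLE',  -- hypothetical table id
    pIsWorkflowSuccessRequired => 'N',  -- bypass DBT workflow-success check
    pIsArchiveEnabled          => 'Y',  -- opt this config into archival
    pIsKeptInTrash             => 'Y',  -- keep archived files in TRASH
    pArchivalStrategy          => 'THRESHOLD_BASED',
    pMinimumAgeMonths          => 0
  );
END;
/
```

Omitted parameters fall back to the defaults shown in the signature, so existing callers compile unchanged.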

View File

@@ -1,52 +0,0 @@
-- ====================================================================
-- TRG_A_WORKFLOW_HISTORY Trigger Definition
-- ====================================================================
-- Purpose: Trigger to:
-- 1. Insert workflow completion data to CT_ODS.A_LOAD_HISTORY
-- 2. MARS-1409: Mark linked A_SOURCE_FILE_RECEIVED records as INGESTED
-- ====================================================================
CREATE OR REPLACE EDITIONABLE TRIGGER "CT_MRDS"."TRG_A_WORKFLOW_HISTORY"
AFTER INSERT OR UPDATE OF workflow_successful ON CT_MRDS.A_WORKFLOW_HISTORY
REFERENCING NEW AS new OLD AS old
FOR EACH ROW
DECLARE
v_workflow_name VARCHAR2(128);
v_wla_id NUMBER;
BEGIN
-- Original logic: Insert into CT_ODS.A_LOAD_HISTORY for specific ODS workflows
IF :new.workflow_name IN ('w_ODS_LM_STANDING_FACILITIES', 'w_ODS_CSDB_DEBT', 'w_ODS_CSDB_DEBT_DAILY', 'w_ODS_CSDB_RATINGS_FULL') AND :new.service_name = 'ODS' THEN
IF :new.workflow_successful <> :old.workflow_successful AND :new.workflow_successful = 'Y' THEN
IF :new.workflow_name = 'w_ODS_LM_STANDING_FACILITIES' THEN
v_workflow_name := 'w_ODS_LM_STANDING_FACILITY';
ELSE
v_workflow_name := :new.workflow_name;
END IF;
BEGIN
v_wla_id := TO_NUMBER(:new.orchestration_run_id);
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
INSERT INTO CT_ODS.A_LOAD_HISTORY (
a_etl_load_set_key, workflow_name, infa_run_id, load_start, load_end,
exdi_appl_req_id, exdi_correlation_id, load_successful, wla_run_id, dq_flag
) VALUES (
:new.a_workflow_history_key, v_workflow_name, NULL, :new.workflow_start, :new.workflow_end,
NULL, NULL, :new.workflow_successful, v_wla_id, 'F'
);
END IF;
END IF;
-- MARS-1409: When workflow completes successfully, mark linked files as INGESTED
IF :new.workflow_successful = 'Y' THEN
IF INSERTING OR (UPDATING AND (:old.workflow_successful IS NULL OR :old.workflow_successful != 'Y')) THEN
UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED
SET PROCESSING_STATUS = 'INGESTED',
PROCESS_NAME = :new.service_name
WHERE A_WORKFLOW_HISTORY_KEY = :new.a_workflow_history_key;
END IF;
END IF;
END;
/

View File

@@ -32,7 +32,7 @@ PROMPT MARS-1409 Rollback Starting
PROMPT ============================================================================
PROMPT Package: CT_MRDS.FILE_MANAGER
PROMPT Change: Remove A_WORKFLOW_HISTORY_KEY column and restore previous version
PROMPT Steps: 10 (Restore FILE_ARCHIVER, Restore FILE_MANAGER, Restore ENV_MANAGER, Restore trigger, Clear data, Drop column, Verify)
PROMPT Steps: 13 (Drop tables/columns first, then Restore ENV_MANAGER, FILE_MANAGER, DATA_EXPORTER, FILE_ARCHIVER (dependency order), Restore trigger, Verify)
PROMPT Timestamp:
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS rollback_start FROM DUAL;
PROMPT ============================================================================
@@ -49,64 +49,83 @@ END;
PROMPT
PROMPT ============================================================================
PROMPT STEP 1: Restore FILE_ARCHIVER package specification (previous version)
PROMPT STEP 1: Drop A_TABLE_STAT, A_TABLE_STAT_HIST and IS_WORKFLOW_SUCCESS_REQUIRED column
PROMPT (must be done BEFORE compiling rollback packages so column names match)
PROMPT ============================================================================
@@91_MARS_1409_rollback_CT_MRDS_FILE_ARCHIVER_SPEC.sql
@@98_MARS_1409_rollback_archival_strategy_column.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 2: Restore FILE_ARCHIVER package body (previous version)
PROMPT ============================================================================
@@92_MARS_1409_rollback_CT_MRDS_FILE_ARCHIVER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 3: Restore FILE_MANAGER package specification (previous version)
PROMPT ============================================================================
@@93_MARS_1409_rollback_CT_MRDS_FILE_MANAGER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 4: Restore FILE_MANAGER package body (previous version)
PROMPT ============================================================================
@@94_MARS_1409_rollback_CT_MRDS_FILE_MANAGER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 5: Restore ENV_MANAGER package specification (previous version)
PROMPT ============================================================================
@@95_MARS_1409_rollback_CT_MRDS_ENV_MANAGER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 6: Restore ENV_MANAGER package body (previous version)
PROMPT ============================================================================
@@96_MARS_1409_rollback_CT_MRDS_ENV_MANAGER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 7: Restore TRG_A_WORKFLOW_HISTORY trigger (previous version)
PROMPT ============================================================================
@@97_MARS_1409_rollback_CT_MRDS_TRG_A_WORKFLOW_HISTORY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 8: Clear A_WORKFLOW_HISTORY_KEY values from existing records
PROMPT ============================================================================
@@98_MARS_1409_rollback_existing_workflow_keys.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 9: Drop A_WORKFLOW_HISTORY_KEY column
PROMPT STEP 2: Drop A_WORKFLOW_HISTORY_KEY column from A_SOURCE_FILE_RECEIVED
PROMPT ============================================================================
@@99_MARS_1409_rollback_workflow_history_key_column.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 10: Verify rollback
PROMPT STEP 3: Restore ENV_MANAGER package specification (previous version)
PROMPT ============================================================================
@@95_MARS_1409_rollback_CT_MRDS_ENV_MANAGER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 4: Restore ENV_MANAGER package body (previous version)
PROMPT ============================================================================
@@96_MARS_1409_rollback_CT_MRDS_ENV_MANAGER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 5: Restore FILE_MANAGER package specification (previous version)
PROMPT ============================================================================
@@93_MARS_1409_rollback_CT_MRDS_FILE_MANAGER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 6: Restore FILE_MANAGER package body (previous version)
PROMPT ============================================================================
@@94_MARS_1409_rollback_CT_MRDS_FILE_MANAGER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 7: Restore DATA_EXPORTER package specification (previous version)
PROMPT ============================================================================
@@83_MARS_1409_rollback_CT_MRDS_DATA_EXPORTER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 8: Restore DATA_EXPORTER package body (previous version)
PROMPT ============================================================================
@@84_MARS_1409_rollback_CT_MRDS_DATA_EXPORTER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 9: Restore FILE_ARCHIVER package specification (previous version)
PROMPT ============================================================================
@@91_MARS_1409_rollback_CT_MRDS_FILE_ARCHIVER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 10: Restore FILE_ARCHIVER package body (previous version)
PROMPT ============================================================================
@@92_MARS_1409_rollback_CT_MRDS_FILE_ARCHIVER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 11: Restore A_WORKFLOW_HISTORY trigger (previous version)
PROMPT ============================================================================
@@97_MARS_1409_rollback_CT_MRDS_A_WORKFLOW_HISTORY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 12: Verify rollback
PROMPT ============================================================================
@@90_MARS_1409_verify_rollback.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 13: Verify package versions
PROMPT ============================================================================
@@verify_packages_version.sql
PROMPT
PROMPT ============================================================================
PROMPT MARS-1409 Rollback Complete
@@ -118,3 +137,5 @@ PROMPT Review the log file for detailed results: &_filename
PROMPT ============================================================================
spool off
quit;

View File

@@ -0,0 +1,101 @@
-- ====================================================================
-- A_SOURCE_FILE_CONFIG Table (rollback_version - pre MARS-1409)
-- ====================================================================
-- Purpose: Store source file configuration and processing rules
-- MARS-1049: Added ENCODING column for CSV character set support
-- MARS-828: Added ARCHIVAL_STRATEGY and MINIMUM_AGE_MONTHS for archival automation
-- NOTE: IS_WORKFLOW_SUCCESS_REQUIRED column NOT included (added by MARS-1409)
-- ====================================================================
CREATE TABLE CT_MRDS.A_SOURCE_FILE_CONFIG (
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL ENABLE,
A_SOURCE_KEY VARCHAR2(30) NOT NULL ENABLE,
SOURCE_FILE_TYPE VARCHAR2(200), -- Can be 'INPUT' or 'CONTAINER' or 'LOAD_CONFIG'
SOURCE_FILE_ID VARCHAR2(200),
SOURCE_FILE_DESC VARCHAR2(2000),
SOURCE_FILE_NAME_PATTERN VARCHAR2(200),
TABLE_ID VARCHAR2(200),
TEMPLATE_TABLE_NAME VARCHAR2(200),
CONTAINER_FILE_KEY NUMBER(38,0),
ARCHIVE_THRESHOLD_DAYS NUMBER(4,0),
ARCHIVE_THRESHOLD_FILES_COUNT NUMBER(38,0),
ARCHIVE_THRESHOLD_BYTES_SUM NUMBER(38,0),
ODS_SCHEMA_NAME VARCHAR2(100),
ARCHIVE_THRESHOLD_ROWS_COUNT NUMBER(38,0),
HOURS_TO_EXPIRE_STATISTICS NUMBER(38,3),
ARCHIVAL_STRATEGY VARCHAR2(50),
MINIMUM_AGE_MONTHS NUMBER(3,0),
ENCODING VARCHAR2(50) DEFAULT 'UTF8',
IS_ARCHIVE_ENABLED CHAR(1) DEFAULT 'N' NOT NULL,
IS_KEEP_IN_TRASH CHAR(1) DEFAULT 'N' NOT NULL,
CONSTRAINT A_SOURCE_FILE_CONFIG_PK PRIMARY KEY (A_SOURCE_FILE_CONFIG_KEY),
CONSTRAINT CHK_IS_ARCHIVE_ENABLED CHECK (IS_ARCHIVE_ENABLED IN ('Y', 'N')),
CONSTRAINT CHK_IS_KEEP_IN_TRASH CHECK (IS_KEEP_IN_TRASH IN ('Y', 'N')),
CONSTRAINT SOURCE_FILE_TYPE_CHK CHECK (SOURCE_FILE_TYPE IN ('INPUT', 'CONTAINER', 'LOAD_CONFIG')),
CONSTRAINT ASFC_A_SOURCE_KEY_FK FOREIGN KEY(A_SOURCE_KEY) REFERENCES CT_MRDS.A_SOURCE(A_SOURCE_KEY),
CONSTRAINT ASFC_CONTAINER_FILE_KEY_FK FOREIGN KEY(CONTAINER_FILE_KEY) REFERENCES CT_MRDS.A_SOURCE_FILE_CONFIG(A_SOURCE_FILE_CONFIG_KEY),
CONSTRAINT A_SOURCE_FILE_CONFIG_UQ1 UNIQUE(SOURCE_FILE_TYPE, SOURCE_FILE_ID, TABLE_ID)
) TABLESPACE "DATA";
-- Column comments
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY IS
'Primary key - unique identifier for source file configuration record';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_KEY IS
'Foreign key to A_SOURCE table - identifies the source system (e.g., LM, C2D, CSDB)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE IS
'Type of file configuration: INPUT (data files), CONTAINER (xml files), or LOAD_CONFIG (configuration files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID IS
'Unique identifier for the source file within the source system (e.g., UC_DISSEM, STANDING_FACILITIES)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_DESC IS
'Human-readable description of the source file and its purpose';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_NAME_PATTERN IS
'Filename pattern for matching incoming files (supports wildcards, e.g., UC_NMA_DISSEM-*.csv)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID IS
'Identifier for the target table where data will be loaded (without schema prefix)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME IS
'Fully qualified name of template table in CT_ET_TEMPLATES schema used for external table creation';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY IS
'Foreign key to parent container configuration when this file is part of an XML container file (NULL for standalone files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_DAYS IS
'Threshold for THRESHOLD_BASED strategy: archive data older than N days';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_FILES_COUNT IS
'Trigger archival when file count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_BYTES_SUM IS
'Trigger archival when total size in bytes exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_ROWS_COUNT IS
'Trigger archival when total row count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ODS_SCHEMA_NAME IS
'Schema name where ODS external tables are created (typically ODS)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.HOURS_TO_EXPIRE_STATISTICS IS
'Number of hours before table statistics expire and need to be recalculated';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY IS
'Archival strategy: THRESHOLD_BASED (days-based), MINIMUM_AGE_MONTHS (0=current month, N=retain N months), HYBRID (combination)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS IS
'Minimum age in months before archival (required for MINIMUM_AGE_MONTHS and HYBRID strategies, 0=current month only)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING IS
'Oracle character set name for CSV files (e.g., UTF8, WE8MSWIN1252, EE8ISO8859P2)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED IS
'Y=Enable archiving, N=Skip archiving. Controls if table participates in archival process';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_KEEP_IN_TRASH IS
'Y=Keep files in TRASH after archiving, N=Delete immediately. Controls TRASH retention policy';
GRANT SELECT, INSERT, UPDATE, DELETE ON CT_MRDS.A_SOURCE_FILE_CONFIG TO MRDS_LOADER_ROLE;


@@ -0,0 +1,26 @@
-- ====================================================================
-- A_TABLE_STAT Table (rollback_version - pre MARS-1409)
-- ====================================================================
-- Purpose: Store current table statistics and archival thresholds
-- NOTE: This is the pre-MARS-1409 structure without:
-- ARCHIVAL_STRATEGY, ARCH_MINIMUM_AGE_MONTHS, IS_WORKFLOW_SUCCESS_REQUIRED,
-- WORKFLOW_SUCCESS_FILE_COUNT, WORKFLOW_SUCCESS_ROW_COUNT, WORKFLOW_SUCCESS_TOTAL_SIZE
-- Column names: SIZE (not TOTAL_SIZE), OVER_ARCH_THRESOLD_SIZE (not OVER_ARCH_THRESOLD_TOTAL_SIZE)
-- ====================================================================
CREATE TABLE CT_MRDS.A_TABLE_STAT (
A_TABLE_STAT_KEY NUMBER(38,0) PRIMARY KEY,
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL,
TABLE_NAME VARCHAR2(200) NOT NULL,
CREATED TIMESTAMP(6) DEFAULT SYSTIMESTAMP,
ARCH_THRESHOLD_DAYS NUMBER(4,0),
FILE_COUNT NUMBER(38,0),
ROW_COUNT NUMBER(38,0),
"SIZE" NUMBER(38,0),
OVER_ARCH_THRESOLD_FILE_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_ROW_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_SIZE NUMBER(38,0),
CONSTRAINT A_TABLE_STAT_UK1 UNIQUE(A_SOURCE_FILE_CONFIG_KEY)
) TABLESPACE "DATA";
-- Note: A_TABLE_STAT_UK1 index is auto-created by the UNIQUE constraint definition above.


@@ -0,0 +1,23 @@
-- ====================================================================
-- A_TABLE_STAT_HIST Table (rollback_version - pre MARS-1409)
-- ====================================================================
-- Purpose: Store historical table statistics for trend analysis
-- NOTE: This is the pre-MARS-1409 structure without:
-- ARCHIVAL_STRATEGY, ARCH_MINIMUM_AGE_MONTHS, IS_WORKFLOW_SUCCESS_REQUIRED,
-- WORKFLOW_SUCCESS_FILE_COUNT, WORKFLOW_SUCCESS_ROW_COUNT, WORKFLOW_SUCCESS_TOTAL_SIZE
-- Column names: SIZE (not TOTAL_SIZE), OVER_ARCH_THRESOLD_SIZE (not OVER_ARCH_THRESOLD_TOTAL_SIZE)
-- ====================================================================
CREATE TABLE CT_MRDS.A_TABLE_STAT_HIST (
A_TABLE_STAT_HIST_KEY NUMBER(38,0) PRIMARY KEY,
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL,
TABLE_NAME VARCHAR2(200) NOT NULL,
CREATED TIMESTAMP(6) DEFAULT SYSTIMESTAMP,
ARCH_THRESHOLD_DAYS NUMBER(4,0),
FILE_COUNT NUMBER(38,0),
ROW_COUNT NUMBER(38,0),
"SIZE" NUMBER(38,0),
OVER_ARCH_THRESOLD_FILE_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_ROW_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_SIZE NUMBER(38,0)
) TABLESPACE "DATA";


@@ -0,0 +1,48 @@
WHENEVER SQLERROR CONTINUE
GRANT SELECT, INSERT, UPDATE, DELETE ON ct_ods.a_load_history TO ct_mrds;
WHENEVER SQLERROR EXIT SQL.SQLCODE
create or replace TRIGGER ct_mrds.a_workflow_history
AFTER INSERT OR UPDATE OF workflow_successful ON ct_mrds.a_workflow_history
REFERENCING NEW AS new OLD AS old
FOR EACH ROW
DECLARE
v_workflow_name VARCHAR2(128);
v_wla_id NUMBER;
BEGIN
IF :new.service_name = 'ODS' AND :new.workflow_name IN (
'w_ODS_LM_STANDING_FACILITIES', 'w_ODS_CSDB_DEBT', 'w_ODS_CSDB_DEBT_DAILY', 'w_ODS_CSDB_RATINGS_FULL',
'w_ODS_TMS_LIMIT_ACCESS', 'w_ODS_TMS_PORTFOLIO_ACCESS', 'w_ODS_TMS_PORTFOLIO_TREE',
'w_ODS_TMS_COLLATERAL_INVENTORY', 'w_ODS_TOP_FULLBIDARRAY_COMPILED', 'w_ODS_TOP_ANNOUNCEMENT',
'w_ODS_TOP_ALLOTMENT_MODIFICATIONS', 'w_ODS_TOP_ALLOTMENT', 'w_ODS_CEPH_PRICING', 'w_ODS_C2D_MPEC'
) THEN
IF :new.workflow_successful = 'Y' AND :new.workflow_successful <> NVL(:old.workflow_successful, 'N') THEN
CASE
WHEN :new.workflow_name = 'w_ODS_LM_STANDING_FACILITIES' THEN v_workflow_name := 'w_ODS_LM_STANDING_FACILITY';
WHEN :new.workflow_name = 'w_ODS_TMS_LIMIT_ACCESS' THEN v_workflow_name := 'w_ODS_TMS_RAR_LIMITACCESS';
WHEN :new.workflow_name = 'w_ODS_TMS_PORTFOLIO_ACCESS' THEN v_workflow_name := 'w_ODS_TMS_RAR_PORTFOLIOACCESS';
WHEN :new.workflow_name = 'w_ODS_TMS_PORTFOLIO_TREE' THEN v_workflow_name := 'w_ODS_TMS_RAR_PORTFOLIOTREE';
WHEN :new.workflow_name = 'w_ODS_TMS_COLLATERAL_INVENTORY' THEN v_workflow_name := 'w_ODS_TMS_RAR_RARCOLLATERALINVENTORY';
WHEN :new.workflow_name = 'w_ODS_TOP_FULLBIDARRAY_COMPILED' THEN v_workflow_name := 'w_ODS_TOP_FULLBIDARRAY_COMPILED';
WHEN :new.workflow_name = 'w_ODS_TOP_ANNOUNCEMENT' THEN v_workflow_name := 'w_ODS_TOP_ANNOUNCEMENT';
WHEN :new.workflow_name = 'w_ODS_TOP_ALLOTMENT_MODIFICATIONS' THEN v_workflow_name := 'w_ODS_TOP_ALLOTMENT_MODIFICATIONS';
WHEN :new.workflow_name = 'w_ODS_TOP_ALLOTMENT' THEN v_workflow_name := 'w_ODS_TOP_ALLOTMENT';
WHEN :new.workflow_name = 'w_ODS_CEPH_PRICING' THEN v_workflow_name := 'w_ODS_CEPH_PRICING';
WHEN :new.workflow_name = 'w_ODS_C2D_MPEC' THEN v_workflow_name := 'w_ODS_C2D_MPEC';
ELSE
v_workflow_name := :new.workflow_name;
END CASE;
BEGIN
v_wla_id := TO_NUMBER(:new.orchestration_run_id);
EXCEPTION WHEN OTHERS THEN NULL;
END;
INSERT INTO ct_ods.a_load_history (
a_etl_load_set_key, workflow_name, infa_run_id, load_start, load_end, exdi_appl_req_id, exdi_correlation_id, load_successful, wla_run_id, dq_flag
) VALUES (
:new.a_workflow_history_key, v_workflow_name, NULL, :new.workflow_start, :new.workflow_end, NULL, NULL, :new.workflow_successful, v_wla_id, 'F'
);
END IF;
END IF;
END;
/

File diff suppressed because it is too large.


@@ -0,0 +1,220 @@
create or replace PACKAGE CT_MRDS.DATA_EXPORTER
AUTHID CURRENT_USER
AS
/**
* Data Export Package: Provides comprehensive data export capabilities to various formats (CSV, Parquet)
* with support for cloud storage integration via Oracle Cloud Infrastructure (OCI).
* The structure of this comment is used by the GET_PACKAGE_DOCUMENTATION function,
* which returns documentation text for the Confluence page (ready to copy-paste).
**/
-- Package Version Information
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '2.17.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(19) := '2026-03-11 10:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(50) := 'MRDS Development Team';
-- Version History (last 3-5 changes)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'v2.17.0 (2026-03-11): PARQUET FIX - Added pFormat parameter to buildQueryWithDateFormats. REPLACE(col,CHR(34)) now applied only when pFormat=CSV. EXPORT_TABLE_DATA_BY_DATE passes PARQUET - string data was being corrupted (single " doubled to ""). Parquet is binary and needs no quote escaping.' || CHR(10) ||
'v2.16.0 (2026-03-11): RFC 4180 FIX - Added REPLACE(col,CHR(34),CHR(34)||CHR(34)) in buildQueryWithDateFormats for VARCHAR2/CHAR/CLOB. Pre-doubled values produce compliant CSV for ORACLE_LOADER OPTIONALLY ENCLOSED BY chr(34).' || CHR(10) ||
'v2.6.3 (2026-01-28): COMPILATION FIX - Resolved ORA-00904 error in EXPORT_PARTITION_PARALLEL. SQLERRM and DBMS_UTILITY.FORMAT_ERROR_BACKTRACE cannot be used directly in SQL UPDATE statements. Now properly assigned to vgMsgTmp variable before UPDATE.' || CHR(10) ||
'v2.6.2 (2026-01-28): CRITICAL FIX - Race condition when multiple exports run simultaneously. Changed DELETE to filter by age (>24h) instead of deleting all COMPLETED chunks. Prevents concurrent sessions from deleting each other chunks. Session-safe cleanup with TASK_NAME filtering. Enables true parallel execution of multiple export jobs.' || CHR(10) ||
'v2.6.1 (2026-01-28): Added DELETE_FAILED_EXPORT_FILE procedure to clean up partial/corrupted files before retry. When partition fails mid-export, partial file is deleted before retry to prevent Oracle from creating _1 suffixed duplicates. Ensures clean retry without orphaned files in OCI bucket.' || CHR(10) ||
'v2.6.0 (2026-01-28): CRITICAL FIX - Added STATUS tracking to A_PARALLEL_EXPORT_CHUNKS table to prevent data duplication on retry. System now restarts ONLY failed partitions instead of re-exporting all data. Added ERROR_MESSAGE and EXPORT_TIMESTAMP columns for better error handling and monitoring. Prevents duplicate file creation when parallel tasks fail (e.g., 22 partitions with 16 threads, 3 failures no longer duplicates 19 successful exports).' || CHR(10) ||
'v2.5.0 (2026-01-26): Added recorddelimiter parameter with CRLF (CHR(13)||CHR(10)) for CSV exports to ensure Windows-compatible line endings. Improves cross-platform compatibility when CSV files are opened in Windows applications (Notepad, Excel).' || CHR(10) ||
'v2.4.0 (2026-01-11): Added pTemplateTableName parameter for per-column date format configuration. Implements dynamic query building with TO_CHAR for each date/timestamp column using FILE_MANAGER.GET_DATE_FORMAT. Supports 3-tier hierarchy: column-specific, template DEFAULT, global fallback. Eliminates single dateformat limitation of DBMS_CLOUD.EXPORT_DATA.' || CHR(10) ||
'v2.3.0 (2025-12-20): Added parallel partition processing using DBMS_PARALLEL_EXECUTE. New pParallelDegree parameter (1-16, default 1) for EXPORT_TABLE_DATA_BY_DATE and EXPORT_TABLE_DATA_TO_CSV_BY_DATE procedures. Each year/month partition processed in separate thread for improved performance.' || CHR(10) ||
'v2.2.0 (2025-12-19): DRY refactoring - extracted shared helper functions (sanitizeFilename, VALIDATE_TABLE_AND_COLUMNS, GET_PARTITIONS, EXPORT_SINGLE_PARTITION worker procedure). Reduced code duplication by ~400 lines. Prepared architecture for v2.3.0 parallel processing.' || CHR(10) ||
'v2.1.1 (2025-12-04): Fixed JOIN column reference A_WORKFLOW_HISTORY_KEY -> A_ETL_LOAD_SET_KEY, added consistent column mapping and dynamic column list to EXPORT_TABLE_DATA procedure, enhanced DEBUG logging for all export operations' || CHR(10) ||
'v2.1.0 (2025-10-22): Added version tracking and PARTITION_YEAR/PARTITION_MONTH support' || CHR(10) ||
'v2.0.0 (2025-10-01): Separated export functionality from FILE_MANAGER package' || CHR(10);
cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);
vgMsgTmp VARCHAR2(32000);
---------------------------------------------------------------------------------------------------------------------------
-- TYPE DEFINITIONS FOR PARTITION HANDLING
---------------------------------------------------------------------------------------------------------------------------
/**
* Record type for year/month partition information
**/
TYPE partition_rec IS RECORD (
year VARCHAR2(4),
month VARCHAR2(2)
);
/**
* Table type for collection of partition records
**/
TYPE partition_tab IS TABLE OF partition_rec;
---------------------------------------------------------------------------------------------------------------------------
-- INTERNAL PARALLEL PROCESSING CALLBACK
---------------------------------------------------------------------------------------------------------------------------
/**
* @name EXPORT_PARTITION_PARALLEL
* @desc Internal callback procedure for DBMS_PARALLEL_EXECUTE.
* Processes single partition (year/month) chunk in parallel task.
* Called by DBMS_PARALLEL_EXECUTE framework for each chunk.
* This procedure is PUBLIC because DBMS_PARALLEL_EXECUTE requires it,
* but should NOT be called directly by external code.
* @param pStartId - Chunk start ID (CHUNK_ID from A_PARALLEL_EXPORT_CHUNKS table)
* @param pEndId - Chunk end ID (same as pStartId for single-row chunks)
**/
PROCEDURE EXPORT_PARTITION_PARALLEL (
pStartId IN NUMBER,
pEndId IN NUMBER
);
---------------------------------------------------------------------------------------------------------------------------
-- MAIN EXPORT PROCEDURES
---------------------------------------------------------------------------------------------------------------------------
/**
* @name EXPORT_TABLE_DATA
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data into a CSV file on OCI infrastructure.
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example
* begin
* DATA_EXPORTER.EXPORT_TABLE_DATA(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'csv_exports'
* );
* end;
**/
PROCEDURE EXPORT_TABLE_DATA (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
/**
* @name EXPORT_TABLE_DATA_BY_DATE
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data into PARQUET files on OCI infrastructure.
* Each YEAR_MONTH pair goes to a separate file (implicit partitioning).
* Allows specifying custom column list or uses T.* if pColumnList is NULL.
* Validates that all columns in pColumnList exist in the target table.
* Automatically adds 'T.' prefix to column names in pColumnList.
* Supports parallel partition processing via pParallelDegree parameter (default 1, range 1-16).
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example
* begin
* DATA_EXPORTER.EXPORT_TABLE_DATA_BY_DATE(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'parquet_exports',
* pColumnList => 'COLUMN1, COLUMN2, COLUMN3', -- Optional
* pMinDate => DATE '2024-01-01',
* pMaxDate => SYSDATE,
* pParallelDegree => 8 -- Optional, default 1, range 1-16
* );
* end;
**/
PROCEDURE EXPORT_TABLE_DATA_BY_DATE (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pColumnList IN VARCHAR2 default NULL,
pMinDate IN DATE default DATE '1900-01-01',
pMaxDate IN DATE default SYSDATE,
pParallelDegree IN NUMBER default 1,
pTemplateTableName IN VARCHAR2 default NULL,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
/**
* @name EXPORT_TABLE_DATA_TO_CSV_BY_DATE
* @desc Exports data to separate CSV files partitioned by year and month.
* Creates one CSV file for each year/month combination found in the data.
* Uses the same date filtering mechanism with CT_ODS.A_LOAD_HISTORY as EXPORT_TABLE_DATA_BY_DATE,
* but exports to CSV format instead of Parquet.
* Supports parallel partition processing via pParallelDegree parameter (1-16).
* File naming pattern: {pFileName}_YYYYMM.csv or {TABLENAME}_YYYYMM.csv (if pFileName is NULL)
* @example
* begin
* -- With custom filename
* DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'exports',
* pFileName => 'my_export.csv',
* pMinDate => DATE '2024-01-01',
* pMaxDate => SYSDATE,
* pParallelDegree => 8 -- Optional, default 1, range 1-16
* );
*
* -- With auto-generated filename (based on table name only)
* DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
* pSchemaName => 'OU_TOP',
* pTableName => 'AGGREGATED_ALLOTMENT',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'ARCHIVE',
* pFolderName => 'exports',
* pMinDate => DATE '2025-09-01',
* pMaxDate => DATE '2025-09-17'
* );
* -- This will create files like: AGGREGATED_ALLOTMENT_202509.csv, etc.
* end;
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
**/
PROCEDURE EXPORT_TABLE_DATA_TO_CSV_BY_DATE (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pFileName IN VARCHAR2 DEFAULT NULL,
pColumnList IN VARCHAR2 default NULL,
pMinDate IN DATE default DATE '1900-01-01',
pMaxDate IN DATE default SYSDATE,
pParallelDegree IN NUMBER default 1,
pTemplateTableName IN VARCHAR2 default NULL,
pMaxFileSize IN NUMBER default 104857600,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
---------------------------------------------------------------------------------------------------------------------------
-- VERSION MANAGEMENT FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* Returns the current package version number
* return: Version string in format X.Y.Z (e.g., '2.1.0')
**/
FUNCTION GET_VERSION RETURN VARCHAR2;
/**
* Returns comprehensive build information including version, date, and author
* return: Formatted string with complete build details
**/
FUNCTION GET_BUILD_INFO RETURN VARCHAR2;
/**
* Returns the version history with recent changes
* return: Multi-line string with version history
**/
FUNCTION GET_VERSION_HISTORY RETURN VARCHAR2;
END;
/


@@ -1,40 +0,0 @@
-- ====================================================================
-- TRG_A_WORKFLOW_HISTORY Trigger Definition (rollback version)
-- ====================================================================
-- Purpose: Restore trigger to pre-MARS-1409 state
-- Handles only CT_ODS.A_LOAD_HISTORY inserts for ODS workflows
-- ====================================================================
CREATE OR REPLACE EDITIONABLE TRIGGER "CT_MRDS"."TRG_A_WORKFLOW_HISTORY"
AFTER INSERT OR UPDATE OF workflow_successful ON CT_MRDS.A_WORKFLOW_HISTORY
REFERENCING NEW AS new OLD AS old
FOR EACH ROW
DECLARE
v_workflow_name VARCHAR2(128);
v_wla_id NUMBER;
BEGIN
IF :new.workflow_name IN ('w_ODS_LM_STANDING_FACILITIES', 'w_ODS_CSDB_DEBT', 'w_ODS_CSDB_DEBT_DAILY', 'w_ODS_CSDB_RATINGS_FULL') AND :new.service_name = 'ODS' THEN
IF :new.workflow_successful <> :old.workflow_successful AND :new.workflow_successful = 'Y' THEN
IF :new.workflow_name = 'w_ODS_LM_STANDING_FACILITIES' THEN
v_workflow_name := 'w_ODS_LM_STANDING_FACILITY';
ELSE
v_workflow_name := :new.workflow_name;
END IF;
BEGIN
v_wla_id := TO_NUMBER(:new.orchestration_run_id);
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
INSERT INTO CT_ODS.A_LOAD_HISTORY (
a_etl_load_set_key, workflow_name, infa_run_id, load_start, load_end,
exdi_appl_req_id, exdi_correlation_id, load_successful, wla_run_id, dq_flag
) VALUES (
:new.a_workflow_history_key, v_workflow_name, NULL, :new.workflow_start, :new.workflow_end,
NULL, NULL, :new.workflow_successful, v_wla_id, 'F'
);
END IF;
END IF;
END;
/


@@ -18,6 +18,8 @@ DECLARE
v_env_manager_build VARCHAR2(100);
v_file_archiver_version VARCHAR2(50);
v_file_archiver_build VARCHAR2(100);
v_data_exporter_version VARCHAR2(50);
v_data_exporter_build VARCHAR2(500);
BEGIN
-- Get FILE_MANAGER version
BEGIN
@@ -55,6 +57,18 @@ BEGIN
DBMS_OUTPUT.PUT_LINE('WARNING: Could not retrieve FILE_ARCHIVER version');
END;
-- Get DATA_EXPORTER version
BEGIN
v_data_exporter_version := CT_MRDS.DATA_EXPORTER.GET_VERSION();
v_data_exporter_build := CT_MRDS.DATA_EXPORTER.GET_BUILD_INFO();
DBMS_OUTPUT.PUT_LINE('DATA_EXPORTER Version: ' || v_data_exporter_version);
DBMS_OUTPUT.PUT_LINE('DATA_EXPORTER Build: ' || v_data_exporter_build);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('WARNING: Could not retrieve DATA_EXPORTER version');
END;
-- Insert version records into A_PACKAGE_VERSION_TRACKING
BEGIN
EXECUTE IMMEDIATE 'INSERT INTO CT_MRDS.A_PACKAGE_VERSION_TRACKING
@@ -78,6 +92,13 @@ BEGIN
USING 'CT_MRDS', 'FILE_ARCHIVER', 'BOTH', v_file_archiver_version,
'', '', 'MARS-1409';
EXECUTE IMMEDIATE 'INSERT INTO CT_MRDS.A_PACKAGE_VERSION_TRACKING
(PACKAGE_OWNER, PACKAGE_NAME, PACKAGE_TYPE, PACKAGE_VERSION,
PACKAGE_BUILD_DATE, PACKAGE_AUTHOR, TRACKING_DATE, TRACKED_BY_USER, TRACKED_BY_MODULE)
VALUES (:1, :2, :3, :4, :5, :6, SYSTIMESTAMP, USER, :7)'
USING 'CT_MRDS', 'DATA_EXPORTER', 'BOTH', v_data_exporter_version,
'', '', 'MARS-1409';
COMMIT;
DBMS_OUTPUT.PUT_LINE('Package version tracking recorded successfully');
EXCEPTION


@@ -26,12 +26,25 @@ PROMPT CT_MRDS.ENV_MANAGER Package:
SELECT CT_MRDS.ENV_MANAGER.GET_VERSION() AS VERSION FROM DUAL;
SELECT CT_MRDS.ENV_MANAGER.GET_BUILD_INFO() AS BUILD_INFO FROM DUAL;
-- FILE_ARCHIVER version
PROMPT
PROMPT CT_MRDS.FILE_ARCHIVER Package:
SELECT CT_MRDS.FILE_ARCHIVER.GET_VERSION() AS VERSION FROM DUAL;
SELECT CT_MRDS.FILE_ARCHIVER.GET_BUILD_INFO() AS BUILD_INFO FROM DUAL;
-- DATA_EXPORTER version
PROMPT
PROMPT CT_MRDS.DATA_EXPORTER Package:
SELECT CT_MRDS.DATA_EXPORTER.GET_VERSION() AS VERSION FROM DUAL;
SELECT CT_MRDS.DATA_EXPORTER.GET_BUILD_INFO() AS BUILD_INFO FROM DUAL;
-- Package compilation status
PROMPT
PROMPT Package Compilation Status:
SELECT object_name, object_type, status, last_ddl_time
-FROM user_objects
-WHERE object_name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER')
+FROM all_objects
+WHERE owner = 'CT_MRDS'
+AND object_name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER', 'DATA_EXPORTER')
AND object_type IN ('PACKAGE', 'PACKAGE BODY')
ORDER BY object_name, object_type;
@@ -39,8 +52,9 @@ ORDER BY object_name, object_type;
PROMPT
PROMPT Compilation Errors (if any):
SELECT name, type, line, position, text
-FROM user_errors
-WHERE name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER')
+FROM all_errors
+WHERE owner = 'CT_MRDS'
+AND name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER', 'DATA_EXPORTER')
ORDER BY name, type, line, position;
PROMPT


@@ -0,0 +1,5 @@
# Exclude temporary folders from version control
confluence/
log/
test/
mock_data/


@@ -0,0 +1,55 @@
-- ============================================================================
-- MARS-1005-PREHOOK Installation Script 00: DATA_EXPORTER Package
-- ============================================================================
-- Purpose: Deploy updated DATA_EXPORTER package (SPEC + BODY) v2.17.0
-- PARQUET FIX: Added pFormat parameter to buildQueryWithDateFormats.
-- REPLACE(col,CHR(34)) now applied only when pFormat=CSV.
-- EXPORT_TABLE_DATA_BY_DATE passes PARQUET - string data was being
-- corrupted (single " doubled to "") in Parquet binary files.
-- v2.16.0 RFC 4180 FIX remains intact for CSV path.
-- Schema: CT_MRDS
-- Object: PACKAGE DATA_EXPORTER
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
PROMPT
PROMPT ============================================================================
PROMPT MARS-1005-PREHOOK: Installing CT_MRDS.DATA_EXPORTER Package
PROMPT ============================================================================
PROMPT Package: CT_MRDS.DATA_EXPORTER
PROMPT Version: 2.16.0 -> 2.17.0
PROMPT Change: PARQUET FIX - pFormat param added to buildQueryWithDateFormats.
PROMPT REPLACE(col,CHR(34)) applied only when pFormat=CSV.
PROMPT Parquet path no longer corrupts strings containing double quotes.
PROMPT ============================================================================
PROMPT
PROMPT Step 1: Deploy Package Specification
PROMPT ============================================================================
@@new_version\DATA_EXPORTER.pkg
PROMPT
PROMPT Package specification deployment completed.
PROMPT
PROMPT
PROMPT Step 2: Deploy Package Body
PROMPT ============================================================================
@@new_version\DATA_EXPORTER.pkb
PROMPT
PROMPT Package body deployment completed.
PROMPT
PROMPT
PROMPT ============================================================================
PROMPT DATA_EXPORTER Package installation completed (v2.17.0)
PROMPT ============================================================================
PROMPT
--=============================================================================================================================
-- End of Script
--=============================================================================================================================


@@ -0,0 +1,49 @@
-- ============================================================================
-- MARS-1005-PREHOOK Rollback Script 90: DATA_EXPORTER Package
-- ============================================================================
-- Purpose: Restore DATA_EXPORTER package (SPEC + BODY) to v2.6.3
-- Reverting the RFC 4180 fix (escape=true removal).
-- Schema: CT_MRDS
-- Object: PACKAGE DATA_EXPORTER
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
PROMPT
PROMPT ============================================================================
PROMPT MARS-1005-PREHOOK: Rolling back CT_MRDS.DATA_EXPORTER Package
PROMPT ============================================================================
PROMPT Package: CT_MRDS.DATA_EXPORTER
PROMPT Version: 2.17.0 -> 2.6.3 (ROLLBACK)
PROMPT Change: Restoring escape=true in DBMS_CLOUD.EXPORT_DATA CSV format
PROMPT ============================================================================
PROMPT
PROMPT Step 1: Restore Package Specification
PROMPT ============================================================================
@@rollback_version\DATA_EXPORTER.pkg
PROMPT
PROMPT Package specification rollback completed.
PROMPT
PROMPT
PROMPT Step 2: Restore Package Body
PROMPT ============================================================================
@@rollback_version\DATA_EXPORTER.pkb
PROMPT
PROMPT Package body rollback completed.
PROMPT
PROMPT
PROMPT ============================================================================
PROMPT DATA_EXPORTER Package rollback completed (v2.6.3 restored)
PROMPT ============================================================================
PROMPT
--=============================================================================================================================
-- End of Script
--=============================================================================================================================


@@ -0,0 +1,115 @@
# MARS-1005-PREHOOK: Fix DATA_EXPORTER RFC 4180 Compliance + Parquet Format Support
## Overview
Pre-hook for MARS-1005. Deploys an updated `CT_MRDS.DATA_EXPORTER` package (v2.17.0)
that resolves two export format bugs:
1. **RFC 4180 compliance (v2.15.0 / v2.16.0):** Previous versions used `escape=true`
in `DBMS_CLOUD.EXPORT_DATA`, producing backslash-escaped embedded quotes (`\"`).
ODS external tables (`FIELDS CSV WITHOUT EMBEDDED`) expect RFC 4180 doubling (`""`).
Fix: removed `escape=true`; implemented `REPLACE(col, '"', '""')` in SELECT query.
2. **Parquet corruption fix (v2.17.0):** The RFC 4180 `REPLACE` was applied to all
export formats, including Parquet — corrupting values that contained double-quotes.
Fix: added `pFormat` parameter to `buildQueryWithDateFormats`; `REPLACE` is now
applied only when `pFormat = 'CSV'`. Parquet exports pass column values unchanged.
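As an illustration, the format-dependent escaping reduces to an expression like the following. This is a hypothetical simplification, not the package's actual internals: the column name, table name, and `:pFormat` bind are placeholders, and the real `buildQueryWithDateFormats` also applies per-column date formats.

```sql
-- Hypothetical sketch of the v2.17.0 select-list logic.
-- For an input value:    He said "hi"
-- the CSV branch yields: He said ""hi""
-- which ORACLE_LOADER with OPTIONALLY ENCLOSED BY chr(34) reads back unchanged.
SELECT CASE
         WHEN :pFormat = 'CSV'
           THEN REPLACE(t.some_text_col, CHR(34), CHR(34) || CHR(34))  -- RFC 4180 doubling
         ELSE t.some_text_col                                          -- Parquet: pass through
       END AS some_text_col
  FROM ct_mrds.my_table t;
```

The branch keeps the v2.16.0 CSV fix intact while leaving the Parquet path untouched.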
## Contents
| File | Description |
|------|-------------|
| `install_mars1005_prehook.sql` | Master installation script (SPOOL, ACCEPT, quit) |
| `rollback_mars1005_prehook.sql` | Master rollback script (SPOOL, ACCEPT, quit) |
| `00_MARS_1005_PREHOOK_install_DATA_EXPORTER.sql` | Deploy DATA_EXPORTER v2.17.0 |
| `90_MARS_1005_PREHOOK_rollback_DATA_EXPORTER.sql` | Restore DATA_EXPORTER v2.14.0 |
| `track_package_versions.sql` | Universal version tracking script |
| `verify_packages_version.sql` | Universal package verification script |
| `new_version/DATA_EXPORTER.pkg` | Package specification v2.17.0 |
| `new_version/DATA_EXPORTER.pkb` | Package body v2.17.0 |
| `rollback_version/DATA_EXPORTER.pkg` | Package specification v2.14.0 (backup) |
| `rollback_version/DATA_EXPORTER.pkb` | Package body v2.14.0 (backup) |
| `README.md` | This file |
## Prerequisites
- Oracle Database 23ai
- `CT_MRDS.ENV_MANAGER` v3.1.0+
- ADMIN user with EXECUTE privileges on CT_MRDS schema
- Connection service: `ggmichalski_high`
## Version Change
| Package | Before | After |
|---------|--------|-------|
| `CT_MRDS.DATA_EXPORTER` | v2.14.0 | v2.17.0 |
## Installation
### Option 1: Master Script (Recommended)
```powershell
# Execute as ADMIN user for proper privilege management
Get-Content "MARS_Packages/REL03/MARS-1005-PREHOOK/install_mars1005_prehook.sql" | sql "ADMIN/Cloudpass#34@ggmichalski_high"
```
Log file created automatically: `log/INSTALL_MARS_1005_PREHOOK_<PDB>_<timestamp>.log`
### Option 2: Individual Scripts
```powershell
# Execute as ADMIN user
Get-Content "MARS_Packages/REL03/MARS-1005-PREHOOK/00_MARS_1005_PREHOOK_install_DATA_EXPORTER.sql" | sql "ADMIN/Cloudpass#34@ggmichalski_high"
```
## Verification
```sql
-- Verify package version
SELECT CT_MRDS.DATA_EXPORTER.GET_VERSION() FROM DUAL;
-- Expected: 2.17.0
-- Verify build info
SELECT CT_MRDS.DATA_EXPORTER.GET_BUILD_INFO() FROM DUAL;
-- Check for compilation errors
SELECT * FROM ALL_ERRORS
WHERE OWNER = 'CT_MRDS'
AND NAME = 'DATA_EXPORTER';
-- Verify no untracked changes
SELECT CT_MRDS.ENV_MANAGER.CHECK_PACKAGE_CHANGES('CT_MRDS', 'DATA_EXPORTER') FROM DUAL;
-- Expected: OK
```
## Rollback
```powershell
# Execute as ADMIN user
Get-Content "MARS_Packages/REL03/MARS-1005-PREHOOK/rollback_mars1005_prehook.sql" | sql "ADMIN/Cloudpass#34@ggmichalski_high"
```
Rollback restores `CT_MRDS.DATA_EXPORTER` to v2.14.0 (re-enables `escape=true`).
## Testing
After installation, verify the Parquet and CSV export paths:
**CSV path (ODS):** Export a table, then SELECT from the corresponding ODS external
table. Values with embedded double-quotes should appear as single `"` characters
and not trigger `ORA-30653`.
**Parquet path (ARCHIVE):** Export a table containing values with embedded
double-quotes to the ARCHIVE bucket area. Download the Parquet file and confirm
the value is stored verbatim (no extra quotes).
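A quick local sanity check on a downloaded CSV export can be sketched as follows. This is a hypothetical helper, not part of the deployment: the sample content and the heuristic itself are assumptions, and the input should be replaced with the actual file pulled from the bucket.

```python
# Sketch of a local sanity check for a downloaded CSV export
# (hypothetical helper; adapt the input to the real downloaded file).
import csv
import io

def looks_rfc4180(text: str) -> bool:
    """Heuristic: embedded quotes are doubled, not backslash-escaped."""
    if '\\"' in text:  # backslash escaping = pre-v2.15.0 escape=true output
        return False
    # The file must parse without leaving quote characters glued to
    # field boundaries (a symptom of broken escaping).
    for row in csv.reader(io.StringIO(text)):
        if any(f.startswith('"') or f.endswith('"') for f in row):
            return False
    return True

sample = 'ID,NAME\r\n1,"O""HARA"\r\n'   # value O"HARA, RFC 4180 doubled
print(looks_rfc4180(sample))            # True
```

For the Parquet path no such text check applies — the format is binary, so verifying the value requires reading the file back with a Parquet-aware tool.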
## Expected Changes
- `CT_MRDS.DATA_EXPORTER`: v2.14.0 → v2.17.0
- No table structure changes
- No configuration changes
## Related
- MARS-1005: Export TOP allotment data (main issue)
- `CT_MRDS.DATA_EXPORTER` source: `MARS_Packages/mrds_elt-dev-database/mrds_elt-dev-database/database/CT_MRDS/packages/DATA_EXPORTER.sql`

View File

@@ -0,0 +1,91 @@
-- ===================================================================
-- MARS-1005-PREHOOK INSTALL SCRIPT: Fix DATA_EXPORTER RFC 4180 Compliance
-- ===================================================================
-- Purpose: Pre-hook for MARS-1005 - Deploy updated DATA_EXPORTER (v2.17.0)
-- that fixes RFC 4180 CSV compliance and Parquet format corruption.
-- Background: DATA_EXPORTER v2.14.0 uses escape=true in DBMS_CLOUD.EXPORT_DATA,
-- which produces backslash-escaped embedded quotes (\")
-- instead of RFC 4180 doubling ("").
-- v2.16.0 fixed CSV by applying REPLACE(col, '"', '""') in SELECT,
-- but this corrupted Parquet exports containing double-quote values.
-- v2.17.0 applies REPLACE only when pFormat = 'CSV', leaving
-- Parquet exports unchanged.
-- Author: Grzegorz Michalski
-- Date: 2026-03-10
-- Dynamic spool file generation (using SYS_CONTEXT - no DBA privileges required)
-- Log files are automatically created in log/ subdirectory
-- IMPORTANT: Ensure log/ directory exists before SPOOL (use host mkdir)
host mkdir log 2>nul
var filename VARCHAR2(100)
BEGIN
:filename := 'log/INSTALL_MARS_1005_PREHOOK_' || SYS_CONTEXT('USERENV', 'CON_NAME') || '_' || TO_CHAR(SYSDATE,'YYYYMMDD_HH24MISS') || '.log';
END;
/
column filename new_value _filename
select :filename filename from dual;
spool &_filename
SET ECHO OFF
SET TIMING ON
SET SERVEROUTPUT ON SIZE UNLIMITED
SET PAUSE OFF
PROMPT =========================================================================
PROMPT MARS-1005-PREHOOK: Fix DATA_EXPORTER RFC 4180 Compliance
PROMPT =========================================================================
PROMPT
PROMPT Problem: DATA_EXPORTER v2.14.0 uses escape=true which produces \"-escaped
PROMPT quotes instead of RFC 4180 doubling "". v2.16.0 fixed CSV but
PROMPT applied REPLACE to Parquet exports too, corrupting quote values.
PROMPT
PROMPT This script will:
PROMPT - Deploy CT_MRDS.DATA_EXPORTER v2.17.0 (RFC 4180 CSV + Parquet fix)
PROMPT - CSV exports use RFC 4180 doubling ("") for embedded quotes
PROMPT - Parquet exports pass column values unchanged (no REPLACE applied)
PROMPT - ODS external tables (FIELDS CSV WITHOUT EMBEDDED) are NOT modified
PROMPT
PROMPT Expected Duration: 1-2 minutes
PROMPT =========================================================================
-- Confirm installation with user
ACCEPT continue CHAR PROMPT 'Type YES to continue with installation, or Ctrl+C to abort: '
WHENEVER SQLERROR EXIT SQL.SQLCODE
BEGIN
IF '&continue' IS NULL OR TRIM('&continue') IS NULL OR UPPER(TRIM('&continue')) != 'YES' THEN
RAISE_APPLICATION_ERROR(-20999, 'Installation aborted by user.');
END IF;
END;
/
WHENEVER SQLERROR CONTINUE
PROMPT
PROMPT =========================================================================
PROMPT Step 1: Deploy DATA_EXPORTER v2.17.0 (Parquet Fix + RFC 4180)
PROMPT =========================================================================
@@00_MARS_1005_PREHOOK_install_DATA_EXPORTER.sql
PROMPT
PROMPT =========================================================================
PROMPT Step 2: Track Package Versions
PROMPT =========================================================================
@@track_package_versions.sql
PROMPT
PROMPT =========================================================================
PROMPT Step 3: Verify Package Versions
PROMPT =========================================================================
@@verify_packages_version.sql
PROMPT
PROMPT =========================================================================
PROMPT MARS-1005-PREHOOK Installation - COMPLETED
PROMPT =========================================================================
PROMPT Check the log file for complete installation details.
PROMPT For rollback, use: rollback_mars1005_prehook.sql
PROMPT =========================================================================
spool off
quit;

File diff suppressed because it is too large

View File

@@ -0,0 +1,246 @@
create or replace PACKAGE CT_MRDS.DATA_EXPORTER
AUTHID CURRENT_USER
AS
/**
* Data Export Package: Provides comprehensive data export capabilities to various formats (CSV, Parquet)
* with support for cloud storage integration via Oracle Cloud Infrastructure (OCI).
* The structure of this comment is used by the GET_PACKAGE_DOCUMENTATION function,
* which returns documentation text for the Confluence page (ready to copy-paste).
**/
-- Package Version Information
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '2.17.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-03-11 10:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(100) := 'Grzegorz Michalski';
-- Version History (last 3-5 changes)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'v2.17.0 (2026-03-11): PARQUET FIX - Added pFormat parameter to buildQueryWithDateFormats. REPLACE(col,CHR(34)) now applied only when pFormat=CSV. EXPORT_TABLE_DATA_BY_DATE passes PARQUET - string data was being corrupted (single " doubled to ""). Parquet is binary and needs no quote escaping.' || CHR(10) ||
'v2.16.0 (2026-03-11): RFC 4180 FIX - Added REPLACE(col,CHR(34),CHR(34)||CHR(34)) in buildQueryWithDateFormats for VARCHAR2/CHAR/CLOB. Oracle DBMS_CLOUD has no native RFC 4180 doubling: escape=true uses backslash, no escape leaves raw quotes. Pre-doubled values produce compliant CSV for ORACLE_LOADER OPTIONALLY ENCLOSED BY chr(34).' || CHR(10) ||
'v2.15.0 (2026-03-10): INCOMPLETE FIX - Removed escape=true only; embedded quotes still unescaped. Superseded by v2.16.0.' || CHR(10) ||
'v2.14.0 (2026-02-25): OPTIMIZATION - Added pTaskName parameter to EXPORT_PARTITION_PARALLEL for deterministic filtering. Replaced FETCH FIRST 1 ROW ONLY safeguard with precise WHERE CHUNK_ID AND TASK_NAME filter. Eliminates ORDER BY overhead and provides cleaner session isolation.' || CHR(10) ||
'v2.13.1 (2026-02-25): CRITICAL FIX - Added START_ID and END_ID aliases in CREATE_CHUNKS_BY_SQL to avoid ORA-00960 ambiguous column naming error.' || CHR(10) ||
'v2.13.0 (2026-02-25): CRITICAL SESSION ISOLATION FIX - Changed CREATE_CHUNKS_BY_NUMBER_COL to CREATE_CHUNKS_BY_SQL with TASK_NAME filter (fixes ORA-01422 in concurrent sessions). Added ORDER BY CREATED_DATE DESC FETCH FIRST 1 ROW safeguard to EXPORT_PARTITION_PARALLEL SELECT. Composite PK (TASK_NAME, CHUNK_ID) now fully functional.' || CHR(10) ||
'v2.12.0 (2026-02-24): CRITICAL FIX - Rewritten DELETE_FAILED_EXPORT_FILE to use file-specific pattern matching (prevents deleting parallel CSV chunks in shared folder). Added vQuery logging before DBMS_CLOUD calls. Added CSV maxfilesize logging.' || CHR(10) ||
'v2.11.0 (2026-02-18): Added pJobClass parameter to EXPORT_TABLE_DATA_BY_DATE and EXPORT_TABLE_DATA_TO_CSV_BY_DATE for Oracle Scheduler job class support (resource/priority management).' || CHR(10);
cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);
vgMsgTmp VARCHAR2(32000);
---------------------------------------------------------------------------------------------------------------------------
-- TYPE DEFINITIONS FOR PARTITION HANDLING
---------------------------------------------------------------------------------------------------------------------------
/**
* Record type for year/month partition information
**/
TYPE partition_rec IS RECORD (
year VARCHAR2(4),
month VARCHAR2(2)
);
/**
* Table type for collection of partition records
**/
TYPE partition_tab IS TABLE OF partition_rec;
---------------------------------------------------------------------------------------------------------------------------
-- INTERNAL PARALLEL PROCESSING CALLBACK
---------------------------------------------------------------------------------------------------------------------------
/**
* @name EXPORT_PARTITION_PARALLEL
* @desc Internal callback procedure for DBMS_PARALLEL_EXECUTE.
* Processes single partition (year/month) chunk in parallel task.
* Called by DBMS_PARALLEL_EXECUTE framework for each chunk.
* This procedure is PUBLIC because DBMS_PARALLEL_EXECUTE requires it,
* but should NOT be called directly by external code.
* @param pStartId - Chunk start ID (CHUNK_ID from A_PARALLEL_EXPORT_CHUNKS table)
* @param pEndId - Chunk end ID (same as pStartId for single-row chunks)
* @param pTaskName - Task name for session isolation (optional, DEFAULT NULL for backward compatibility)
**/
PROCEDURE EXPORT_PARTITION_PARALLEL (
pStartId IN NUMBER,
pEndId IN NUMBER,
pTaskName IN VARCHAR2 DEFAULT NULL
);
---------------------------------------------------------------------------------------------------------------------------
-- MAIN EXPORT PROCEDURES
---------------------------------------------------------------------------------------------------------------------------
/**
* @name EXPORT_TABLE_DATA
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data into single CSV file on OCI infrastructure.
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* Supports template table for column order and per-column date formatting.
* When pRegisterExport=TRUE, successfully exported file is registered in:
* - CT_MRDS.A_SOURCE_FILE_RECEIVED (tracks file location, size, checksum, and metadata)
* @param pFileName - Optional filename (e.g., 'export.csv'). NULL = auto-generate from table name
* @param pTemplateTableName - Optional template table (SCHEMA.TABLE or TABLE) for:
* - Column order control (template defines CSV structure)
* - Per-column date formatting via FILE_MANAGER.GET_DATE_FORMAT
* - NULL = use source table columns in natural order
* @param pMaxFileSize - Maximum file size in bytes (default 104857600 = 100MB, min 10MB, max 1GB)
* @param pRegisterExport - When TRUE, registers exported CSV file in A_SOURCE_FILE_RECEIVED table
* @param pProcessName - Process name stored in PROCESS_NAME column (default 'DATA_EXPORTER')
* @example
* begin
* DATA_EXPORTER.EXPORT_TABLE_DATA(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'csv_exports',
* pFileName => 'my_export.csv', -- Optional
* pTemplateTableName => 'CT_ET_TEMPLATES.MY_TEMPLATE', -- Optional
* pMaxFileSize => 104857600, -- Optional, default 100MB
* pRegisterExport => TRUE -- Optional, default FALSE
* );
* end;
**/
PROCEDURE EXPORT_TABLE_DATA (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pFileName IN VARCHAR2 default NULL,
pTemplateTableName IN VARCHAR2 default NULL,
pMaxFileSize IN NUMBER default 104857600,
pRegisterExport IN BOOLEAN default FALSE,
pProcessName IN VARCHAR2 default 'DATA_EXPORTER',
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
/**
* @name EXPORT_TABLE_DATA_BY_DATE
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data into PARQUET files on OCI infrastructure.
* Each YEAR_MONTH pair goes to a separate file (implicit partitioning).
* Allows specifying custom column list or uses T.* if pColumnList is NULL.
* Validates that all columns in pColumnList exist in the target table.
* Automatically adds 'T.' prefix to column names in pColumnList.
* Supports parallel partition processing via pParallelDegree parameter (default 1, range 1-16).
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example
* begin
* DATA_EXPORTER.EXPORT_TABLE_DATA_BY_DATE(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'parquet_exports',
* pColumnList => 'COLUMN1, COLUMN2, COLUMN3', -- Optional
* pMinDate => DATE '2024-01-01',
* pMaxDate => SYSDATE,
* pParallelDegree => 8 -- Optional, default 1, range 1-16
* );
* end;
**/
PROCEDURE EXPORT_TABLE_DATA_BY_DATE (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pColumnList IN VARCHAR2 default NULL,
pMinDate IN DATE default DATE '1900-01-01',
pMaxDate IN DATE default SYSDATE,
pParallelDegree IN NUMBER default 1,
pTemplateTableName IN VARCHAR2 default NULL,
pJobClass IN VARCHAR2 default NULL,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
/**
* @name EXPORT_TABLE_DATA_TO_CSV_BY_DATE
* @desc Exports data to separate CSV files partitioned by year and month.
* Creates one CSV file for each year/month combination found in the data.
* Uses the same CT_ODS.A_LOAD_HISTORY date filtering mechanism as EXPORT_TABLE_DATA_BY_DATE,
* but exports to CSV format instead of Parquet.
* Supports parallel partition processing via pParallelDegree parameter (1-16).
* File naming pattern: {pFileName}_YYYYMM.csv or {TABLENAME}_YYYYMM.csv (if pFileName is NULL)
* When pRegisterExport=TRUE, successfully exported files are registered in:
* - CT_MRDS.A_SOURCE_FILE_RECEIVED (tracks file location, size, checksum, and metadata)
* @param pProcessName - Process name stored in PROCESS_NAME column (default 'DATA_EXPORTER')
* @example
* begin
* -- With custom filename
* DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'exports',
* pFileName => 'my_export.csv',
* pMinDate => DATE '2024-01-01',
* pMaxDate => SYSDATE,
* pParallelDegree => 8, -- Optional, default 1, range 1-16
* pRegisterExport => TRUE -- Optional, default FALSE, registers to A_SOURCE_FILE_RECEIVED
* );
*
* -- With auto-generated filename (based on table name only)
* DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
* pSchemaName => 'OU_TOP',
* pTableName => 'AGGREGATED_ALLOTMENT',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'ARCHIVE',
* pFolderName => 'exports',
* pMinDate => DATE '2025-09-01',
* pMaxDate => DATE '2025-09-17',
* pRegisterExport => TRUE -- Registers each export to A_SOURCE_FILE_RECEIVED table
* );
* -- This will create files like: AGGREGATED_ALLOTMENT_202509.csv, etc.
* end;
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
**/
PROCEDURE EXPORT_TABLE_DATA_TO_CSV_BY_DATE (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pFileName IN VARCHAR2 DEFAULT NULL,
pColumnList IN VARCHAR2 default NULL,
pMinDate IN DATE default DATE '1900-01-01',
pMaxDate IN DATE default SYSDATE,
pParallelDegree IN NUMBER default 1,
pTemplateTableName IN VARCHAR2 default NULL,
pMaxFileSize IN NUMBER default 104857600,
pRegisterExport IN BOOLEAN default FALSE,
pProcessName IN VARCHAR2 default 'DATA_EXPORTER',
pJobClass IN VARCHAR2 default NULL,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
---------------------------------------------------------------------------------------------------------------------------
-- VERSION MANAGEMENT FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* Returns the current package version number
* return: Version string in format X.Y.Z (e.g., '2.1.0')
**/
FUNCTION GET_VERSION RETURN VARCHAR2;
/**
* Returns comprehensive build information including version, date, and author
* return: Formatted string with complete build details
**/
FUNCTION GET_BUILD_INFO RETURN VARCHAR2;
/**
* Returns the version history with recent changes
* return: Multi-line string with version history
**/
FUNCTION GET_VERSION_HISTORY RETURN VARCHAR2;
END;
/

View File

@@ -0,0 +1,66 @@
-- ===================================================================
-- MARS-1005-PREHOOK ROLLBACK SCRIPT: Restore DATA_EXPORTER v2.14.0
-- ===================================================================
-- Purpose: Rollback for MARS-1005-PREHOOK - Restore DATA_EXPORTER to v2.14.0
-- Author: Grzegorz Michalski
-- Date: 2026-03-10
-- Dynamic spool file generation (using SYS_CONTEXT - no DBA privileges required)
-- IMPORTANT: Ensure log/ directory exists before SPOOL (use host mkdir)
host mkdir log 2>nul
var filename VARCHAR2(100)
BEGIN
:filename := 'log/ROLLBACK_MARS_1005_PREHOOK_' || SYS_CONTEXT('USERENV', 'CON_NAME') || '_' || TO_CHAR(SYSDATE,'YYYYMMDD_HH24MISS') || '.log';
END;
/
column filename new_value _filename
select :filename filename from dual;
spool &_filename
SET ECHO OFF
SET TIMING ON
SET SERVEROUTPUT ON SIZE UNLIMITED
SET PAUSE OFF
PROMPT =========================================================================
PROMPT MARS-1005-PREHOOK: Rollback - Restore DATA_EXPORTER v2.14.0
PROMPT =========================================================================
PROMPT This will reverse all changes from MARS-1005-PREHOOK installation.
PROMPT
PROMPT Rollback steps:
PROMPT 1. Restore CT_MRDS.DATA_EXPORTER to v2.14.0
PROMPT 2. Verify package versions
PROMPT =========================================================================
-- Confirm rollback with user
ACCEPT continue CHAR PROMPT 'Type YES to continue with rollback, or Ctrl+C to abort: '
WHENEVER SQLERROR EXIT SQL.SQLCODE
BEGIN
IF '&continue' IS NULL OR TRIM('&continue') IS NULL OR UPPER(TRIM('&continue')) != 'YES' THEN
RAISE_APPLICATION_ERROR(-20999, 'Rollback aborted by user.');
END IF;
END;
/
WHENEVER SQLERROR CONTINUE
PROMPT
PROMPT =========================================================================
PROMPT Step 1: Restore DATA_EXPORTER v2.14.0
PROMPT =========================================================================
@@90_MARS_1005_PREHOOK_rollback_DATA_EXPORTER.sql
PROMPT
PROMPT =========================================================================
PROMPT Step 2: Verify Package Versions
PROMPT =========================================================================
@@verify_packages_version.sql
PROMPT
PROMPT =========================================================================
PROMPT MARS-1005-PREHOOK Rollback - COMPLETED
PROMPT =========================================================================
spool off
quit;

File diff suppressed because it is too large

View File

@@ -0,0 +1,243 @@
create or replace PACKAGE CT_MRDS.DATA_EXPORTER
AUTHID CURRENT_USER
AS
/**
* Data Export Package: Provides comprehensive data export capabilities to various formats (CSV, Parquet)
* with support for cloud storage integration via Oracle Cloud Infrastructure (OCI).
* The structure of this comment is used by the GET_PACKAGE_DOCUMENTATION function,
* which returns documentation text for the Confluence page (ready to copy-paste).
**/
-- Package Version Information
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '2.14.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-02-25 09:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(100) := 'Grzegorz Michalski';
-- Version History (last 3-5 changes)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'v2.14.0 (2026-02-25): OPTIMIZATION - Added pTaskName parameter to EXPORT_PARTITION_PARALLEL for deterministic filtering. Replaced FETCH FIRST 1 ROW ONLY safeguard with precise WHERE CHUNK_ID AND TASK_NAME filter. Eliminates ORDER BY overhead and provides cleaner session isolation.' || CHR(10) ||
'v2.13.1 (2026-02-25): CRITICAL FIX - Added START_ID and END_ID aliases in CREATE_CHUNKS_BY_SQL to avoid ORA-00960 ambiguous column naming error.' || CHR(10) ||
'v2.13.0 (2026-02-25): CRITICAL SESSION ISOLATION FIX - Changed CREATE_CHUNKS_BY_NUMBER_COL to CREATE_CHUNKS_BY_SQL with TASK_NAME filter (fixes ORA-01422 in concurrent sessions). Added ORDER BY CREATED_DATE DESC FETCH FIRST 1 ROW safeguard to EXPORT_PARTITION_PARALLEL SELECT. Composite PK (TASK_NAME, CHUNK_ID) now fully functional.' || CHR(10) ||
'v2.12.0 (2026-02-24): CRITICAL FIX - Rewritten DELETE_FAILED_EXPORT_FILE to use file-specific pattern matching (prevents deleting parallel CSV chunks in shared folder). Added vQuery logging before DBMS_CLOUD calls. Added CSV maxfilesize logging.' || CHR(10) ||
'v2.11.0 (2026-02-18): Added pJobClass parameter to EXPORT_TABLE_DATA_BY_DATE and EXPORT_TABLE_DATA_TO_CSV_BY_DATE for Oracle Scheduler job class support (resource/priority management).' || CHR(10);
cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);
vgMsgTmp VARCHAR2(32000);
---------------------------------------------------------------------------------------------------------------------------
-- TYPE DEFINITIONS FOR PARTITION HANDLING
---------------------------------------------------------------------------------------------------------------------------
/**
* Record type for year/month partition information
**/
TYPE partition_rec IS RECORD (
year VARCHAR2(4),
month VARCHAR2(2)
);
/**
* Table type for collection of partition records
**/
TYPE partition_tab IS TABLE OF partition_rec;
---------------------------------------------------------------------------------------------------------------------------
-- INTERNAL PARALLEL PROCESSING CALLBACK
---------------------------------------------------------------------------------------------------------------------------
/**
* @name EXPORT_PARTITION_PARALLEL
* @desc Internal callback procedure for DBMS_PARALLEL_EXECUTE.
* Processes single partition (year/month) chunk in parallel task.
* Called by DBMS_PARALLEL_EXECUTE framework for each chunk.
* This procedure is PUBLIC because DBMS_PARALLEL_EXECUTE requires it,
* but should NOT be called directly by external code.
* @param pStartId - Chunk start ID (CHUNK_ID from A_PARALLEL_EXPORT_CHUNKS table)
* @param pEndId - Chunk end ID (same as pStartId for single-row chunks)
* @param pTaskName - Task name for session isolation (optional, DEFAULT NULL for backward compatibility)
**/
PROCEDURE EXPORT_PARTITION_PARALLEL (
pStartId IN NUMBER,
pEndId IN NUMBER,
pTaskName IN VARCHAR2 DEFAULT NULL
);
---------------------------------------------------------------------------------------------------------------------------
-- MAIN EXPORT PROCEDURES
---------------------------------------------------------------------------------------------------------------------------
/**
* @name EXPORT_TABLE_DATA
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data into single CSV file on OCI infrastructure.
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* Supports template table for column order and per-column date formatting.
* When pRegisterExport=TRUE, successfully exported file is registered in:
* - CT_MRDS.A_SOURCE_FILE_RECEIVED (tracks file location, size, checksum, and metadata)
* @param pFileName - Optional filename (e.g., 'export.csv'). NULL = auto-generate from table name
* @param pTemplateTableName - Optional template table (SCHEMA.TABLE or TABLE) for:
* - Column order control (template defines CSV structure)
* - Per-column date formatting via FILE_MANAGER.GET_DATE_FORMAT
* - NULL = use source table columns in natural order
* @param pMaxFileSize - Maximum file size in bytes (default 104857600 = 100MB, min 10MB, max 1GB)
* @param pRegisterExport - When TRUE, registers exported CSV file in A_SOURCE_FILE_RECEIVED table
* @param pProcessName - Process name stored in PROCESS_NAME column (default 'DATA_EXPORTER')
* @example
* begin
* DATA_EXPORTER.EXPORT_TABLE_DATA(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'csv_exports',
* pFileName => 'my_export.csv', -- Optional
* pTemplateTableName => 'CT_ET_TEMPLATES.MY_TEMPLATE', -- Optional
* pMaxFileSize => 104857600, -- Optional, default 100MB
* pRegisterExport => TRUE -- Optional, default FALSE
* );
* end;
**/
PROCEDURE EXPORT_TABLE_DATA (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pFileName IN VARCHAR2 default NULL,
pTemplateTableName IN VARCHAR2 default NULL,
pMaxFileSize IN NUMBER default 104857600,
pRegisterExport IN BOOLEAN default FALSE,
pProcessName IN VARCHAR2 default 'DATA_EXPORTER',
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
/**
* @name EXPORT_TABLE_DATA_BY_DATE
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data into PARQUET files on OCI infrastructure.
* Each YEAR_MONTH pair goes to a separate file (implicit partitioning).
* Allows specifying custom column list or uses T.* if pColumnList is NULL.
* Validates that all columns in pColumnList exist in the target table.
* Automatically adds 'T.' prefix to column names in pColumnList.
* Supports parallel partition processing via pParallelDegree parameter (default 1, range 1-16).
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example
* begin
* DATA_EXPORTER.EXPORT_TABLE_DATA_BY_DATE(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'parquet_exports',
* pColumnList => 'COLUMN1, COLUMN2, COLUMN3', -- Optional
* pMinDate => DATE '2024-01-01',
* pMaxDate => SYSDATE,
* pParallelDegree => 8 -- Optional, default 1, range 1-16
* );
* end;
**/
PROCEDURE EXPORT_TABLE_DATA_BY_DATE (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pColumnList IN VARCHAR2 default NULL,
pMinDate IN DATE default DATE '1900-01-01',
pMaxDate IN DATE default SYSDATE,
pParallelDegree IN NUMBER default 1,
pTemplateTableName IN VARCHAR2 default NULL,
pJobClass IN VARCHAR2 default NULL,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
/**
* @name EXPORT_TABLE_DATA_TO_CSV_BY_DATE
* @desc Exports data to separate CSV files partitioned by year and month.
* Creates one CSV file for each year/month combination found in the data.
* Uses the same CT_ODS.A_LOAD_HISTORY date filtering mechanism as EXPORT_TABLE_DATA_BY_DATE,
* but exports to CSV format instead of Parquet.
* Supports parallel partition processing via pParallelDegree parameter (1-16).
* File naming pattern: {pFileName}_YYYYMM.csv or {TABLENAME}_YYYYMM.csv (if pFileName is NULL)
* When pRegisterExport=TRUE, successfully exported files are registered in:
* - CT_MRDS.A_SOURCE_FILE_RECEIVED (tracks file location, size, checksum, and metadata)
* @param pProcessName - Process name stored in PROCESS_NAME column (default 'DATA_EXPORTER')
* @example
* begin
* -- With custom filename
* DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'exports',
* pFileName => 'my_export.csv',
* pMinDate => DATE '2024-01-01',
* pMaxDate => SYSDATE,
* pParallelDegree => 8, -- Optional, default 1, range 1-16
* pRegisterExport => TRUE -- Optional, default FALSE, registers to A_SOURCE_FILE_RECEIVED
* );
*
* -- With auto-generated filename (based on table name only)
* DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
* pSchemaName => 'OU_TOP',
* pTableName => 'AGGREGATED_ALLOTMENT',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'ARCHIVE',
* pFolderName => 'exports',
* pMinDate => DATE '2025-09-01',
* pMaxDate => DATE '2025-09-17',
* pRegisterExport => TRUE -- Registers each export to A_SOURCE_FILE_RECEIVED table
* );
* -- This will create files like: AGGREGATED_ALLOTMENT_202509.csv, etc.
* end;
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
**/
PROCEDURE EXPORT_TABLE_DATA_TO_CSV_BY_DATE (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pFileName IN VARCHAR2 DEFAULT NULL,
pColumnList IN VARCHAR2 default NULL,
pMinDate IN DATE default DATE '1900-01-01',
pMaxDate IN DATE default SYSDATE,
pParallelDegree IN NUMBER default 1,
pTemplateTableName IN VARCHAR2 default NULL,
pMaxFileSize IN NUMBER default 104857600,
pRegisterExport IN BOOLEAN default FALSE,
pProcessName IN VARCHAR2 default 'DATA_EXPORTER',
pJobClass IN VARCHAR2 default NULL,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
---------------------------------------------------------------------------------------------------------------------------
-- VERSION MANAGEMENT FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* Returns the current package version number
* return: Version string in format X.Y.Z (e.g., '2.1.0')
**/
FUNCTION GET_VERSION RETURN VARCHAR2;
/**
* Returns comprehensive build information including version, date, and author
* return: Formatted string with complete build details
**/
FUNCTION GET_BUILD_INFO RETURN VARCHAR2;
/**
* Returns the version history with recent changes
* return: Multi-line string with version history
**/
FUNCTION GET_VERSION_HISTORY RETURN VARCHAR2;
END;
/

View File

@@ -0,0 +1,86 @@
-- ===================================================================
-- Simple Package Version Tracking Script
-- ===================================================================
-- Purpose: Track specified Oracle package versions
-- Author: Grzegorz Michalski
-- Date: 2026-03-11
-- Version: 3.1.0 - List-Based Edition
--
-- USAGE:
-- 1. Edit package list below (add/remove packages as needed)
-- 2. Include in your install/rollback script: @@track_package_versions.sql
-- ===================================================================
SET SERVEROUTPUT ON;
DECLARE
TYPE t_string_array IS TABLE OF VARCHAR2(100);
-- ===================================================================
-- PACKAGE LIST - Edit this array to specify packages to track
-- ===================================================================
-- Add or remove entries as needed for your MARS issue
-- Format: 'SCHEMA.PACKAGE_NAME'
-- ===================================================================
vPackageList t_string_array := t_string_array(
'CT_MRDS.DATA_EXPORTER'
);
-- ===================================================================
vVersion VARCHAR2(50);
vCount NUMBER := 0;
vOwner VARCHAR2(50);
vPackageName VARCHAR2(50);
vDotPos NUMBER;
BEGIN
DBMS_OUTPUT.PUT_LINE('========================================');
DBMS_OUTPUT.PUT_LINE('Package Version Tracking');
DBMS_OUTPUT.PUT_LINE('========================================');
-- Process each package in the list
FOR i IN 1..vPackageList.COUNT LOOP
vDotPos := INSTR(vPackageList(i), '.');
IF vDotPos > 0 THEN
vOwner := SUBSTR(vPackageList(i), 1, vDotPos - 1);
vPackageName := SUBSTR(vPackageList(i), vDotPos + 1);
ELSE
vOwner := USER; -- Default to current user if no schema specified
vPackageName := vPackageList(i);
END IF;
BEGIN
-- Get package version
EXECUTE IMMEDIATE
'SELECT ' || vOwner || '.' || vPackageName || '.GET_VERSION() FROM DUAL'
INTO vVersion;
-- Track the version
CT_MRDS.ENV_MANAGER.TRACK_PACKAGE_VERSION(
pPackageOwner => vOwner,
pPackageName => vPackageName,
pPackageVersion => vVersion,
pPackageBuildDate => NULL, -- Will be retrieved from package
pPackageAuthor => NULL -- Will be retrieved from package
);
DBMS_OUTPUT.PUT_LINE('SUCCESS: Tracked ' || vOwner || '.' || vPackageName || ' v' || vVersion);
vCount := vCount + 1;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('ERROR tracking ' || vOwner || '.' || vPackageName || ': ' || SQLERRM);
END;
END LOOP;
DBMS_OUTPUT.PUT_LINE('========================================');
DBMS_OUTPUT.PUT_LINE('Tracked ' || vCount || ' of ' || vPackageList.COUNT || ' packages successfully');
DBMS_OUTPUT.PUT_LINE('========================================');
END;
/
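Because the script above concatenates owner and package names into dynamic SQL, a hardened variant could validate both identifiers with DBMS_ASSERT before executing. This is a sketch for defense-in-depth (the package list here is hardcoded, so it is not strictly required):

```sql
-- Sketch: validate identifiers before building dynamic SQL.
-- DBMS_ASSERT.SIMPLE_SQL_NAME raises ORA-44003 for anything that is not
-- a plain (optionally quoted) SQL identifier.
DECLARE
  vVersion VARCHAR2(50);
BEGIN
  EXECUTE IMMEDIATE
    'SELECT ' || DBMS_ASSERT.SIMPLE_SQL_NAME('CT_MRDS') || '.'
              || DBMS_ASSERT.SIMPLE_SQL_NAME('DATA_EXPORTER')
              || '.GET_VERSION() FROM DUAL'
    INTO vVersion;
  DBMS_OUTPUT.PUT_LINE('Version: ' || vVersion);
END;
/
```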

View File

@@ -0,0 +1,62 @@
-- ===================================================================
-- Universal Package Version Verification Script
-- ===================================================================
-- Purpose: Verify all tracked Oracle packages for code changes
-- Author: Grzegorz Michalski
-- Date: 2026-03-11
-- Version: 1.0.0
--
-- USAGE:
-- Include at the end of install/rollback scripts: @@verify_packages_version.sql
--
-- OUTPUT:
-- - List of all tracked packages with their current status
-- - OK: Package has not changed since last tracking
-- - WARNING: Package code changed without version update
-- ===================================================================
SET LINESIZE 200
SET PAGESIZE 1000
SET FEEDBACK OFF
PROMPT
PROMPT ========================================
PROMPT Package Version Verification
PROMPT ========================================
PROMPT
COLUMN PACKAGE_OWNER FORMAT A15
COLUMN PACKAGE_NAME FORMAT A20
COLUMN VERSION FORMAT A10
COLUMN STATUS FORMAT A80
SELECT
PACKAGE_OWNER,
PACKAGE_NAME,
PACKAGE_VERSION AS VERSION,
CT_MRDS.ENV_MANAGER.CHECK_PACKAGE_CHANGES(PACKAGE_OWNER, PACKAGE_NAME) AS STATUS
FROM (
SELECT
PACKAGE_OWNER,
PACKAGE_NAME,
PACKAGE_VERSION,
ROW_NUMBER() OVER (PARTITION BY PACKAGE_OWNER, PACKAGE_NAME ORDER BY TRACKING_DATE DESC) AS RN
FROM CT_MRDS.A_PACKAGE_VERSION_TRACKING
)
WHERE RN = 1
ORDER BY PACKAGE_OWNER, PACKAGE_NAME;
PROMPT
PROMPT ========================================
PROMPT Verification Complete
PROMPT ========================================
PROMPT
PROMPT Legend:
PROMPT OK - Package has not changed since last tracking
PROMPT WARNING - Package code changed without version update
PROMPT
PROMPT For detailed hash information, use:
PROMPT SELECT CT_MRDS.ENV_MANAGER.GET_PACKAGE_HASH_INFO('OWNER', 'PACKAGE') FROM DUAL;
PROMPT ========================================
SET FEEDBACK ON

View File

@@ -0,0 +1,5 @@
# Exclude temporary folders from version control
confluence/
log/
test/
mock_data/

View File

@@ -0,0 +1,618 @@
-- =====================================================================================
-- Script: 01_MARS_1005_export_top_data.sql
-- Purpose: Export OU_TOP historical data to ODS bucket (DATA bucket, CSV format)
-- Author: Grzegorz Michalski
-- Created: 2026-03-06
-- MARS Issue: MARS-1005
-- Target: mrds_data_dev/ODS/TOP/
-- Tables:
-- 1. OU_TOP.LEGACY_ALLOTMENT -> ODS/TOP/TOP_ALLOTMENT
-- 2. OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_HEADER -> ODS/TOP/TOP_ALLOTMENT_MODIFICATION_HEADER
-- 3. OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_ITEM -> ODS/TOP/TOP_ALLOTMENT_MODIFICATION_ITEM
-- 4. OU_TOP.LEGACY_ANNOUNCEMENT -> ODS/TOP/TOP_ANNOUNCEMENT
-- 5. OU_TOP.LEGACY_FBL_ITEM -> ODS/TOP/TOP_FULLBIDLIST_ITEM
-- 6. OU_TOP.LEGACY_FULLBID_ARRAY_COMPILED -> ODS/TOP/TOP_FULLBID_ARRAY_COMPILED
-- =====================================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED;
SET TIMING ON;
PROMPT =====================================================================================
PROMPT MARS-1005: OU_TOP Historical Data Export
PROMPT =====================================================================================
PROMPT Export Strategy:
PROMPT - Source: OU_TOP schema tables (operational database)
PROMPT - Target: DATA/ODS bucket as CSV files
PROMPT - Method: DATA_EXPORTER.EXPORT_TABLE_DATA
PROMPT - Registration: Files registered in A_SOURCE_FILE_RECEIVED
PROMPT - Path Structure: ODS/TOP/TOP_*/
PROMPT Tables (6):
PROMPT ALLOTMENT, ALLOTMENT_MODIFICATION_HEADER, ALLOTMENT_MODIFICATION_ITEM,
PROMPT ANNOUNCEMENT, FBL_ITEM (->TOP_FULLBIDLIST_ITEM), FULLBID_ARRAY_COMPILED
PROMPT =====================================================================================
-- Log export start
INSERT INTO CT_MRDS.A_PROCESS_LOG (PROCESS_NAME, PROCEDURE_NAME, LOG_LEVEL, LOG_MESSAGE, PROCEDURE_PARAMETERS)
VALUES ('MARS-1005', 'EXPORT_TOP_DATA', 'INFO', 'Starting historical OU_TOP data export',
'Tables: ALLOTMENT, ALLOTMENT_MODIFICATION_HEADER, ALLOTMENT_MODIFICATION_ITEM, ANNOUNCEMENT, FBL_ITEM, FULLBID_ARRAY_COMPILED');
PROMPT
PROMPT =====================================================================================
PROMPT PRE-EXPORT CHECK: Verify Existing Files in ODS Bucket
PROMPT =====================================================================================
-- The six anonymous blocks below each check one target folder for existing files
-- Check 1: ALLOTMENT
DECLARE
vFileCount NUMBER := 0;
vRecordCount NUMBER := 0;
vLocationUri VARCHAR2(1000);
vPrintCount NUMBER := 0;
BEGIN
vLocationUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA') || 'ODS/TOP/TOP_ALLOTMENT/';
SELECT COUNT(*) INTO vFileCount
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(credential_name => 'OCI$RESOURCE_PRINCIPAL', location_uri => vLocationUri))
WHERE object_name NOT LIKE '%/';
IF vFileCount > 0 THEN
DBMS_OUTPUT.PUT_LINE('===============================================================================');
DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: TOP_ALLOTMENT files already exist in DATA bucket');
DBMS_OUTPUT.PUT_LINE('Location: ' || vLocationUri || ' Files found: ' || vFileCount);
FOR rec IN (SELECT object_name, bytes FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(credential_name => 'OCI$RESOURCE_PRINCIPAL', location_uri => vLocationUri)) WHERE object_name NOT LIKE '%/' ORDER BY object_name) LOOP
DBMS_OUTPUT.PUT_LINE(' - ' || rec.object_name || ' (' || rec.bytes || ' bytes)');
vPrintCount := vPrintCount + 1;
EXIT WHEN vPrintCount >= 10;
END LOOP;
IF vFileCount > 10 THEN
DBMS_OUTPUT.PUT_LINE(' ... and ' || (vFileCount - 10) || ' more file(s)');
END IF;
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ODS.TOP_ALLOTMENT_ODS' INTO vRecordCount;
DBMS_OUTPUT.PUT_LINE('>>> Records via external table: ' || vRecordCount);
EXCEPTION WHEN OTHERS THEN DBMS_OUTPUT.PUT_LINE('WARNING: Cannot count via external table: ' || SQLERRM);
END;
DBMS_OUTPUT.PUT_LINE('===============================================================================');
ELSE
DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: No existing TOP_ALLOTMENT files found - bucket is clean');
END IF;
END;
/
-- Check 2: ALLOTMENT_MODIFICATION_HEADER
DECLARE
vFileCount NUMBER := 0;
vRecordCount NUMBER := 0;
vLocationUri VARCHAR2(1000);
vPrintCount NUMBER := 0;
BEGIN
vLocationUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA') || 'ODS/TOP/TOP_ALLOTMENT_MODIFICATION_HEADER/';
SELECT COUNT(*) INTO vFileCount
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(credential_name => 'OCI$RESOURCE_PRINCIPAL', location_uri => vLocationUri))
WHERE object_name NOT LIKE '%/';
IF vFileCount > 0 THEN
DBMS_OUTPUT.PUT_LINE('===============================================================================');
DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: TOP_ALLOTMENT_MODIFICATION_HEADER files already exist in DATA bucket');
DBMS_OUTPUT.PUT_LINE('Location: ' || vLocationUri || ' Files found: ' || vFileCount);
FOR rec IN (SELECT object_name, bytes FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(credential_name => 'OCI$RESOURCE_PRINCIPAL', location_uri => vLocationUri)) WHERE object_name NOT LIKE '%/' ORDER BY object_name) LOOP
DBMS_OUTPUT.PUT_LINE(' - ' || rec.object_name || ' (' || rec.bytes || ' bytes)');
vPrintCount := vPrintCount + 1;
EXIT WHEN vPrintCount >= 10;
END LOOP;
IF vFileCount > 10 THEN
DBMS_OUTPUT.PUT_LINE(' ... and ' || (vFileCount - 10) || ' more file(s)');
END IF;
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ODS.TOP_ALLOTMENT_MODIFICATION_HEADER_ODS' INTO vRecordCount;
DBMS_OUTPUT.PUT_LINE('>>> Records via external table: ' || vRecordCount);
EXCEPTION WHEN OTHERS THEN DBMS_OUTPUT.PUT_LINE('WARNING: Cannot count via external table: ' || SQLERRM);
END;
DBMS_OUTPUT.PUT_LINE('===============================================================================');
ELSE
DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: No existing TOP_ALLOTMENT_MODIFICATION_HEADER files found - bucket is clean');
END IF;
END;
/
-- Check 3: ALLOTMENT_MODIFICATION_ITEM
DECLARE
vFileCount NUMBER := 0;
vRecordCount NUMBER := 0;
vLocationUri VARCHAR2(1000);
vPrintCount NUMBER := 0;
BEGIN
vLocationUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA') || 'ODS/TOP/TOP_ALLOTMENT_MODIFICATION_ITEM/';
SELECT COUNT(*) INTO vFileCount
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(credential_name => 'OCI$RESOURCE_PRINCIPAL', location_uri => vLocationUri))
WHERE object_name NOT LIKE '%/';
IF vFileCount > 0 THEN
DBMS_OUTPUT.PUT_LINE('===============================================================================');
DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: TOP_ALLOTMENT_MODIFICATION_ITEM files already exist in DATA bucket');
DBMS_OUTPUT.PUT_LINE('Location: ' || vLocationUri || ' Files found: ' || vFileCount);
FOR rec IN (SELECT object_name, bytes FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(credential_name => 'OCI$RESOURCE_PRINCIPAL', location_uri => vLocationUri)) WHERE object_name NOT LIKE '%/' ORDER BY object_name) LOOP
DBMS_OUTPUT.PUT_LINE(' - ' || rec.object_name || ' (' || rec.bytes || ' bytes)');
vPrintCount := vPrintCount + 1;
EXIT WHEN vPrintCount >= 10;
END LOOP;
IF vFileCount > 10 THEN
DBMS_OUTPUT.PUT_LINE(' ... and ' || (vFileCount - 10) || ' more file(s)');
END IF;
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ODS.TOP_ALLOTMENT_MODIFICATION_ITEM_ODS' INTO vRecordCount;
DBMS_OUTPUT.PUT_LINE('>>> Records via external table: ' || vRecordCount);
EXCEPTION WHEN OTHERS THEN DBMS_OUTPUT.PUT_LINE('WARNING: Cannot count via external table: ' || SQLERRM);
END;
DBMS_OUTPUT.PUT_LINE('===============================================================================');
ELSE
DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: No existing TOP_ALLOTMENT_MODIFICATION_ITEM files found - bucket is clean');
END IF;
END;
/
-- Check 4: ANNOUNCEMENT
DECLARE
vFileCount NUMBER := 0;
vRecordCount NUMBER := 0;
vLocationUri VARCHAR2(1000);
vPrintCount NUMBER := 0;
BEGIN
vLocationUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA') || 'ODS/TOP/TOP_ANNOUNCEMENT/';
SELECT COUNT(*) INTO vFileCount
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(credential_name => 'OCI$RESOURCE_PRINCIPAL', location_uri => vLocationUri))
WHERE object_name NOT LIKE '%/';
IF vFileCount > 0 THEN
DBMS_OUTPUT.PUT_LINE('===============================================================================');
DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: TOP_ANNOUNCEMENT files already exist in DATA bucket');
DBMS_OUTPUT.PUT_LINE('Location: ' || vLocationUri || ' Files found: ' || vFileCount);
FOR rec IN (SELECT object_name, bytes FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(credential_name => 'OCI$RESOURCE_PRINCIPAL', location_uri => vLocationUri)) WHERE object_name NOT LIKE '%/' ORDER BY object_name) LOOP
DBMS_OUTPUT.PUT_LINE(' - ' || rec.object_name || ' (' || rec.bytes || ' bytes)');
vPrintCount := vPrintCount + 1;
EXIT WHEN vPrintCount >= 10;
END LOOP;
IF vFileCount > 10 THEN
DBMS_OUTPUT.PUT_LINE(' ... and ' || (vFileCount - 10) || ' more file(s)');
END IF;
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ODS.TOP_ANNOUNCEMENT_ODS' INTO vRecordCount;
DBMS_OUTPUT.PUT_LINE('>>> Records via external table: ' || vRecordCount);
EXCEPTION WHEN OTHERS THEN DBMS_OUTPUT.PUT_LINE('WARNING: Cannot count via external table: ' || SQLERRM);
END;
DBMS_OUTPUT.PUT_LINE('===============================================================================');
ELSE
DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: No existing TOP_ANNOUNCEMENT files found - bucket is clean');
END IF;
END;
/
-- Check 5: FBL_ITEM (folder: TOP_FULLBIDLIST_ITEM)
DECLARE
vFileCount NUMBER := 0;
vRecordCount NUMBER := 0;
vLocationUri VARCHAR2(1000);
vPrintCount NUMBER := 0;
BEGIN
vLocationUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA') || 'ODS/TOP/TOP_FULLBIDLIST_ITEM/';
SELECT COUNT(*) INTO vFileCount
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(credential_name => 'OCI$RESOURCE_PRINCIPAL', location_uri => vLocationUri))
WHERE object_name NOT LIKE '%/';
IF vFileCount > 0 THEN
DBMS_OUTPUT.PUT_LINE('===============================================================================');
DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: TOP_FULLBIDLIST_ITEM files already exist in DATA bucket');
DBMS_OUTPUT.PUT_LINE('Location: ' || vLocationUri || ' Files found: ' || vFileCount);
FOR rec IN (SELECT object_name, bytes FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(credential_name => 'OCI$RESOURCE_PRINCIPAL', location_uri => vLocationUri)) WHERE object_name NOT LIKE '%/' ORDER BY object_name) LOOP
DBMS_OUTPUT.PUT_LINE(' - ' || rec.object_name || ' (' || rec.bytes || ' bytes)');
vPrintCount := vPrintCount + 1;
EXIT WHEN vPrintCount >= 10;
END LOOP;
IF vFileCount > 10 THEN
DBMS_OUTPUT.PUT_LINE(' ... and ' || (vFileCount - 10) || ' more file(s)');
END IF;
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ODS.TOP_FULLBIDLIST_ITEM_ODS' INTO vRecordCount;
DBMS_OUTPUT.PUT_LINE('>>> Records via external table: ' || vRecordCount);
EXCEPTION WHEN OTHERS THEN DBMS_OUTPUT.PUT_LINE('WARNING: Cannot count via external table: ' || SQLERRM);
END;
DBMS_OUTPUT.PUT_LINE('===============================================================================');
ELSE
DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: No existing TOP_FULLBIDLIST_ITEM files found - bucket is clean');
END IF;
END;
/
-- Check 6: FULLBID_ARRAY_COMPILED
DECLARE
vFileCount NUMBER := 0;
vRecordCount NUMBER := 0;
vLocationUri VARCHAR2(1000);
vPrintCount NUMBER := 0;
BEGIN
vLocationUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA') || 'ODS/TOP/TOP_FULLBID_ARRAY_COMPILED/';
SELECT COUNT(*) INTO vFileCount
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(credential_name => 'OCI$RESOURCE_PRINCIPAL', location_uri => vLocationUri))
WHERE object_name NOT LIKE '%/';
IF vFileCount > 0 THEN
DBMS_OUTPUT.PUT_LINE('===============================================================================');
DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: TOP_FULLBID_ARRAY_COMPILED files already exist in DATA bucket');
DBMS_OUTPUT.PUT_LINE('Location: ' || vLocationUri || ' Files found: ' || vFileCount);
FOR rec IN (SELECT object_name, bytes FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(credential_name => 'OCI$RESOURCE_PRINCIPAL', location_uri => vLocationUri)) WHERE object_name NOT LIKE '%/' ORDER BY object_name) LOOP
DBMS_OUTPUT.PUT_LINE(' - ' || rec.object_name || ' (' || rec.bytes || ' bytes)');
vPrintCount := vPrintCount + 1;
EXIT WHEN vPrintCount >= 10;
END LOOP;
IF vFileCount > 10 THEN
DBMS_OUTPUT.PUT_LINE(' ... and ' || (vFileCount - 10) || ' more file(s)');
END IF;
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ODS.TOP_FULLBID_ARRAY_COMPILED_ODS' INTO vRecordCount;
DBMS_OUTPUT.PUT_LINE('>>> Records via external table: ' || vRecordCount);
EXCEPTION WHEN OTHERS THEN DBMS_OUTPUT.PUT_LINE('WARNING: Cannot count via external table: ' || SQLERRM);
END;
DBMS_OUTPUT.PUT_LINE('===============================================================================');
ELSE
DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: No existing TOP_FULLBID_ARRAY_COMPILED files found - bucket is clean');
END IF;
END;
/
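The six pre-export checks above are copies of one template, differing only in the folder and external-table names. If the table list grows, a parameterized local procedure could replace the duplication; this is a sketch using the same folder/table assumptions as the blocks above (it omits the per-file listing for brevity):

```sql
-- Sketch: one parameterized check instead of six copied blocks.
DECLARE
  PROCEDURE check_folder(pFolder VARCHAR2, pExtTable VARCHAR2) IS
    vUri       VARCHAR2(1000);
    vFileCount NUMBER;
    vRecCount  NUMBER;
  BEGIN
    vUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA') || 'ODS/TOP/' || pFolder || '/';
    SELECT COUNT(*) INTO vFileCount
    FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(credential_name => 'OCI$RESOURCE_PRINCIPAL', location_uri => vUri))
    WHERE object_name NOT LIKE '%/';
    IF vFileCount = 0 THEN
      DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: ' || pFolder || ' - bucket is clean');
      RETURN;
    END IF;
    DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: ' || pFolder || ' already has ' || vFileCount || ' file(s)');
    BEGIN
      EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ODS.' || pExtTable INTO vRecCount;
      DBMS_OUTPUT.PUT_LINE('>>> Records via external table: ' || vRecCount);
    EXCEPTION WHEN OTHERS THEN
      DBMS_OUTPUT.PUT_LINE('WARNING: Cannot count via external table: ' || SQLERRM);
    END;
  END;
BEGIN
  check_folder('TOP_ALLOTMENT',                     'TOP_ALLOTMENT_ODS');
  check_folder('TOP_ALLOTMENT_MODIFICATION_HEADER', 'TOP_ALLOTMENT_MODIFICATION_HEADER_ODS');
  check_folder('TOP_ALLOTMENT_MODIFICATION_ITEM',   'TOP_ALLOTMENT_MODIFICATION_ITEM_ODS');
  check_folder('TOP_ANNOUNCEMENT',                  'TOP_ANNOUNCEMENT_ODS');
  check_folder('TOP_FULLBIDLIST_ITEM',              'TOP_FULLBIDLIST_ITEM_ODS');
  check_folder('TOP_FULLBID_ARRAY_COMPILED',        'TOP_FULLBID_ARRAY_COMPILED_ODS');
END;
/
```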
PROMPT
PROMPT =====================================================================================
PROMPT PRE-EXPORT: Verify Source and Target Table Readiness
PROMPT =====================================================================================
DECLARE
v1Source NUMBER := 0; v2Source NUMBER := 0; v3Source NUMBER := 0;
v4Source NUMBER := 0; v5Source NUMBER := 0; v6Source NUMBER := 0;
v1Target NUMBER := 0; v2Target NUMBER := 0; v3Target NUMBER := 0;
v4Target NUMBER := 0; v5Target NUMBER := 0; v6Target NUMBER := 0;
vTotalSource NUMBER := 0;
vTotalTarget NUMBER := 0;
-- safe_count: ONLY for ODS external tables
-- Returns 0 when no data file (ORA-29913, ORA-29400, KUP-13023); re-raises all other errors
PROCEDURE safe_count(pSql VARCHAR2, pResult OUT NUMBER) IS
BEGIN
EXECUTE IMMEDIATE pSql INTO pResult;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE IN (-29913, -29400) OR SQLERRM LIKE '%KUP-13023%' THEN
pResult := 0;
ELSE
RAISE;
END IF;
END;
BEGIN
-- Source counts (direct - if table does not exist, error propagates)
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_ALLOTMENT' INTO v1Source;
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_HEADER' INTO v2Source;
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_ITEM' INTO v3Source;
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_ANNOUNCEMENT' INTO v4Source;
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_FBL_ITEM' INTO v5Source;
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_FULLBID_ARRAY_COMPILED' INTO v6Source;
vTotalSource := v1Source + v2Source + v3Source + v4Source + v5Source + v6Source;
DBMS_OUTPUT.PUT_LINE('Source table record counts (pre-export):');
DBMS_OUTPUT.PUT_LINE('- ALLOTMENT: ' || v1Source);
DBMS_OUTPUT.PUT_LINE('- ALLOTMENT_MODIFICATION_HEADER: ' || v2Source);
DBMS_OUTPUT.PUT_LINE('- ALLOTMENT_MODIFICATION_ITEM: ' || v3Source);
DBMS_OUTPUT.PUT_LINE('- ANNOUNCEMENT: ' || v4Source);
DBMS_OUTPUT.PUT_LINE('- FBL_ITEM: ' || v5Source);
DBMS_OUTPUT.PUT_LINE('- FULLBID_ARRAY_COMPILED: ' || v6Source);
DBMS_OUTPUT.PUT_LINE('- TOTAL SOURCE: ' || vTotalSource);
-- Target external table counts
safe_count('SELECT COUNT(*) FROM ODS.TOP_ALLOTMENT_ODS', v1Target);
safe_count('SELECT COUNT(*) FROM ODS.TOP_ALLOTMENT_MODIFICATION_HEADER_ODS', v2Target);
safe_count('SELECT COUNT(*) FROM ODS.TOP_ALLOTMENT_MODIFICATION_ITEM_ODS', v3Target);
safe_count('SELECT COUNT(*) FROM ODS.TOP_ANNOUNCEMENT_ODS', v4Target);
safe_count('SELECT COUNT(*) FROM ODS.TOP_FULLBIDLIST_ITEM_ODS', v5Target);
safe_count('SELECT COUNT(*) FROM ODS.TOP_FULLBID_ARRAY_COMPILED_ODS', v6Target);
vTotalTarget := v1Target + v2Target + v3Target + v4Target + v5Target + v6Target;
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('Target external table record counts (pre-export):');
DBMS_OUTPUT.PUT_LINE('- TOP_ALLOTMENT_ODS: ' || v1Target);
DBMS_OUTPUT.PUT_LINE('- TOP_ALLOTMENT_MODIFICATION_HEADER_ODS: ' || v2Target);
DBMS_OUTPUT.PUT_LINE('- TOP_ALLOTMENT_MODIFICATION_ITEM_ODS: ' || v3Target);
DBMS_OUTPUT.PUT_LINE('- TOP_ANNOUNCEMENT_ODS: ' || v4Target);
DBMS_OUTPUT.PUT_LINE('- TOP_FULLBIDLIST_ITEM_ODS: ' || v5Target);
DBMS_OUTPUT.PUT_LINE('- TOP_FULLBID_ARRAY_COMPILED_ODS: ' || v6Target);
IF vTotalSource > 0 THEN
DBMS_OUTPUT.PUT_LINE('SUCCESS: Source tables contain data - ready for export');
ELSE
DBMS_OUTPUT.PUT_LINE('WARNING: Source tables exist but contain no data - export will produce empty files');
END IF;
IF vTotalTarget = 0 THEN
DBMS_OUTPUT.PUT_LINE('SUCCESS: Target external tables are clean - ready for fresh export');
ELSE
DBMS_OUTPUT.PUT_LINE('WARNING: Target tables already contain ' || vTotalTarget || ' records - this may be a re-run');
END IF;
DBMS_OUTPUT.PUT_LINE('Proceeding with export...');
END;
/
PROMPT
PROMPT =====================================================================================
PROMPT TABLE 1/6: OU_TOP.LEGACY_ALLOTMENT -> ODS/TOP/TOP_ALLOTMENT
PROMPT =====================================================================================
BEGIN
CT_MRDS.DATA_EXPORTER.EXPORT_TABLE_DATA(
pSchemaName => 'OU_TOP',
pTableName => 'LEGACY_ALLOTMENT',
pKeyColumnName => 'A_ETL_LOAD_SET_FK', -- ETL key for data lookup
pBucketArea => 'ODS',
pFolderName => 'ODS/TOP/TOP_ALLOTMENT',
pTemplateTableName => 'CT_ET_TEMPLATES.TOP_ALLOTMENT',
pMaxFileSize => 104857600,
pRegisterExport => TRUE,
pProcessName => 'MARS-1005'
);
DBMS_OUTPUT.PUT_LINE('SUCCESS: ALLOTMENT export completed successfully');
EXCEPTION
WHEN OTHERS THEN
DECLARE vErrorMsg VARCHAR2(4000) := SUBSTR(SQLERRM, 1, 4000); BEGIN
DBMS_OUTPUT.PUT_LINE('ERROR: ALLOTMENT export failed: ' || vErrorMsg);
INSERT INTO CT_MRDS.A_PROCESS_LOG (GUID, USERNAME, OSUSER, MACHINE, MODULE, PROCESS_NAME, PROCEDURE_NAME, PROCEDURE_PARAMETERS, LOG_LEVEL, LOG_MESSAGE)
VALUES (SYS_GUID(), USER, SYS_CONTEXT('USERENV','OS_USER'), SYS_CONTEXT('USERENV','HOST'),
'MARS-1005', 'MARS-1005', 'EXPORT_ALLOTMENT', NULL, 'ERROR', 'Export failed: ' || vErrorMsg);
COMMIT;
END;
END;
/
PROMPT
PROMPT =====================================================================================
PROMPT TABLE 2/6: OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_HEADER -> ODS/TOP/TOP_ALLOTMENT_MODIFICATION_HEADER
PROMPT =====================================================================================
BEGIN
CT_MRDS.DATA_EXPORTER.EXPORT_TABLE_DATA(
pSchemaName => 'OU_TOP',
pTableName => 'LEGACY_ALLOTMENT_MODIFICATION_HEADER',
pKeyColumnName => 'A_ETL_LOAD_SET_FK', -- ETL key for data lookup
pBucketArea => 'ODS',
pFolderName => 'ODS/TOP/TOP_ALLOTMENT_MODIFICATION_HEADER',
pTemplateTableName => 'CT_ET_TEMPLATES.TOP_ALLOTMENT_MODIFICATION_HEADER',
pMaxFileSize => 104857600,
pRegisterExport => TRUE,
pProcessName => 'MARS-1005'
);
DBMS_OUTPUT.PUT_LINE('SUCCESS: ALLOTMENT_MODIFICATION_HEADER export completed successfully');
EXCEPTION
WHEN OTHERS THEN
DECLARE vErrorMsg VARCHAR2(4000) := SUBSTR(SQLERRM, 1, 4000); BEGIN
DBMS_OUTPUT.PUT_LINE('ERROR: ALLOTMENT_MODIFICATION_HEADER export failed: ' || vErrorMsg);
INSERT INTO CT_MRDS.A_PROCESS_LOG (GUID, USERNAME, OSUSER, MACHINE, MODULE, PROCESS_NAME, PROCEDURE_NAME, PROCEDURE_PARAMETERS, LOG_LEVEL, LOG_MESSAGE)
VALUES (SYS_GUID(), USER, SYS_CONTEXT('USERENV','OS_USER'), SYS_CONTEXT('USERENV','HOST'),
'MARS-1005', 'MARS-1005', 'EXPORT_ALLOTMENT_MOD_HEADER', NULL, 'ERROR', 'Export failed: ' || vErrorMsg);
COMMIT;
END;
END;
/
PROMPT
PROMPT =====================================================================================
PROMPT TABLE 3/6: OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_ITEM -> ODS/TOP/TOP_ALLOTMENT_MODIFICATION_ITEM
PROMPT =====================================================================================
BEGIN
CT_MRDS.DATA_EXPORTER.EXPORT_TABLE_DATA(
pSchemaName => 'OU_TOP',
pTableName => 'LEGACY_ALLOTMENT_MODIFICATION_ITEM',
pKeyColumnName => 'A_ETL_LOAD_SET_FK', -- ETL key for data lookup
pBucketArea => 'ODS',
pFolderName => 'ODS/TOP/TOP_ALLOTMENT_MODIFICATION_ITEM',
pTemplateTableName => 'CT_ET_TEMPLATES.TOP_ALLOTMENT_MODIFICATION_ITEM',
pMaxFileSize => 104857600,
pRegisterExport => TRUE,
pProcessName => 'MARS-1005'
);
DBMS_OUTPUT.PUT_LINE('SUCCESS: ALLOTMENT_MODIFICATION_ITEM export completed successfully');
EXCEPTION
WHEN OTHERS THEN
DECLARE vErrorMsg VARCHAR2(4000) := SUBSTR(SQLERRM, 1, 4000); BEGIN
DBMS_OUTPUT.PUT_LINE('ERROR: ALLOTMENT_MODIFICATION_ITEM export failed: ' || vErrorMsg);
INSERT INTO CT_MRDS.A_PROCESS_LOG (GUID, USERNAME, OSUSER, MACHINE, MODULE, PROCESS_NAME, PROCEDURE_NAME, PROCEDURE_PARAMETERS, LOG_LEVEL, LOG_MESSAGE)
VALUES (SYS_GUID(), USER, SYS_CONTEXT('USERENV','OS_USER'), SYS_CONTEXT('USERENV','HOST'),
'MARS-1005', 'MARS-1005', 'EXPORT_ALLOTMENT_MOD_ITEM', NULL, 'ERROR', 'Export failed: ' || vErrorMsg);
COMMIT;
END;
END;
/
PROMPT
PROMPT =====================================================================================
PROMPT TABLE 4/6: OU_TOP.LEGACY_ANNOUNCEMENT -> ODS/TOP/TOP_ANNOUNCEMENT
PROMPT =====================================================================================
BEGIN
CT_MRDS.DATA_EXPORTER.EXPORT_TABLE_DATA(
pSchemaName => 'OU_TOP',
pTableName => 'LEGACY_ANNOUNCEMENT',
pKeyColumnName => 'A_ETL_LOAD_SET_FK', -- ETL key for data lookup
pBucketArea => 'ODS',
pFolderName => 'ODS/TOP/TOP_ANNOUNCEMENT',
pTemplateTableName => 'CT_ET_TEMPLATES.TOP_ANNOUNCEMENT',
pMaxFileSize => 104857600,
pRegisterExport => TRUE,
pProcessName => 'MARS-1005'
);
DBMS_OUTPUT.PUT_LINE('SUCCESS: ANNOUNCEMENT export completed successfully');
EXCEPTION
WHEN OTHERS THEN
DECLARE vErrorMsg VARCHAR2(4000) := SUBSTR(SQLERRM, 1, 4000); BEGIN
DBMS_OUTPUT.PUT_LINE('ERROR: ANNOUNCEMENT export failed: ' || vErrorMsg);
INSERT INTO CT_MRDS.A_PROCESS_LOG (GUID, USERNAME, OSUSER, MACHINE, MODULE, PROCESS_NAME, PROCEDURE_NAME, PROCEDURE_PARAMETERS, LOG_LEVEL, LOG_MESSAGE)
VALUES (SYS_GUID(), USER, SYS_CONTEXT('USERENV','OS_USER'), SYS_CONTEXT('USERENV','HOST'),
'MARS-1005', 'MARS-1005', 'EXPORT_ANNOUNCEMENT', NULL, 'ERROR', 'Export failed: ' || vErrorMsg);
COMMIT;
END;
END;
/
PROMPT
PROMPT =====================================================================================
PROMPT TABLE 5/6: OU_TOP.LEGACY_FBL_ITEM -> ODS/TOP/TOP_FULLBIDLIST_ITEM
PROMPT =====================================================================================
BEGIN
CT_MRDS.DATA_EXPORTER.EXPORT_TABLE_DATA(
pSchemaName => 'OU_TOP',
pTableName => 'LEGACY_FBL_ITEM',
pKeyColumnName => 'A_ETL_LOAD_SET_FK', -- ETL key for data lookup
pBucketArea => 'ODS',
pFolderName => 'ODS/TOP/TOP_FULLBIDLIST_ITEM',
pTemplateTableName => 'CT_ET_TEMPLATES.TOP_FULLBIDLIST_ITEM',
pMaxFileSize => 104857600,
pRegisterExport => TRUE,
pProcessName => 'MARS-1005'
);
DBMS_OUTPUT.PUT_LINE('SUCCESS: FBL_ITEM export completed successfully');
EXCEPTION
WHEN OTHERS THEN
DECLARE vErrorMsg VARCHAR2(4000) := SUBSTR(SQLERRM, 1, 4000); BEGIN
DBMS_OUTPUT.PUT_LINE('ERROR: FBL_ITEM export failed: ' || vErrorMsg);
INSERT INTO CT_MRDS.A_PROCESS_LOG (GUID, USERNAME, OSUSER, MACHINE, MODULE, PROCESS_NAME, PROCEDURE_NAME, PROCEDURE_PARAMETERS, LOG_LEVEL, LOG_MESSAGE)
VALUES (SYS_GUID(), USER, SYS_CONTEXT('USERENV','OS_USER'), SYS_CONTEXT('USERENV','HOST'),
'MARS-1005', 'MARS-1005', 'EXPORT_FBL_ITEM', NULL, 'ERROR', 'Export failed: ' || vErrorMsg);
COMMIT;
END;
END;
/
PROMPT
PROMPT =====================================================================================
PROMPT TABLE 6/6: OU_TOP.LEGACY_FULLBID_ARRAY_COMPILED -> ODS/TOP/TOP_FULLBID_ARRAY_COMPILED
PROMPT =====================================================================================
BEGIN
CT_MRDS.DATA_EXPORTER.EXPORT_TABLE_DATA(
pSchemaName => 'OU_TOP',
pTableName => 'LEGACY_FULLBID_ARRAY_COMPILED',
pKeyColumnName => 'A_ETL_LOAD_SET_FK', -- ETL key for data lookup
pBucketArea => 'ODS',
pFolderName => 'ODS/TOP/TOP_FULLBID_ARRAY_COMPILED',
pTemplateTableName => 'CT_ET_TEMPLATES.TOP_FULLBID_ARRAY_COMPILED',
pMaxFileSize => 104857600,
pRegisterExport => TRUE,
pProcessName => 'MARS-1005'
);
DBMS_OUTPUT.PUT_LINE('SUCCESS: FULLBID_ARRAY_COMPILED export completed successfully');
EXCEPTION
WHEN OTHERS THEN
DECLARE vErrorMsg VARCHAR2(4000) := SUBSTR(SQLERRM, 1, 4000); BEGIN
DBMS_OUTPUT.PUT_LINE('ERROR: FULLBID_ARRAY_COMPILED export failed: ' || vErrorMsg);
INSERT INTO CT_MRDS.A_PROCESS_LOG (GUID, USERNAME, OSUSER, MACHINE, MODULE, PROCESS_NAME, PROCEDURE_NAME, PROCEDURE_PARAMETERS, LOG_LEVEL, LOG_MESSAGE)
VALUES (SYS_GUID(), USER, SYS_CONTEXT('USERENV','OS_USER'), SYS_CONTEXT('USERENV','HOST'),
'MARS-1005', 'MARS-1005', 'EXPORT_FULLBID_ARRAY_COMPILED', NULL, 'ERROR', 'Export failed: ' || vErrorMsg);
COMMIT;
END;
END;
/
PROMPT
PROMPT =====================================================================================
PROMPT Export Summary - Checking Results
PROMPT =====================================================================================
-- Log completion
INSERT INTO CT_MRDS.A_PROCESS_LOG (PROCESS_NAME, PROCEDURE_NAME, LOG_LEVEL, LOG_MESSAGE)
VALUES ('MARS-1005', 'EXPORT_TOP_DATA', 'INFO', 'OU_TOP historical export run finished - review per-table log entries for any failures');
PROMPT
PROMPT =====================================================================================
PROMPT MARS-1005 OU_TOP Export Completed!
PROMPT =====================================================================================
PROMPT POST-EXPORT: Source vs Target Record Count Comparison
PROMPT =====================================================================================
DECLARE
v1S NUMBER := 0; v2S NUMBER := 0; v3S NUMBER := 0;
v4S NUMBER := 0; v5S NUMBER := 0; v6S NUMBER := 0;
v1T NUMBER := 0; v2T NUMBER := 0; v3T NUMBER := 0;
v4T NUMBER := 0; v5T NUMBER := 0; v6T NUMBER := 0;
vTotalS NUMBER := 0;
vTotalT NUMBER := 0;
vMismatch NUMBER := 0;
-- safe_count: ONLY for ODS external tables
-- Returns 0 when no data file (ORA-29913, ORA-29400, KUP-13023); re-raises all other errors
PROCEDURE safe_count(pSql VARCHAR2, pResult OUT NUMBER) IS
BEGIN
EXECUTE IMMEDIATE pSql INTO pResult;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE IN (-29913, -29400) OR SQLERRM LIKE '%KUP-13023%' THEN
pResult := 0;
ELSE
RAISE;
END IF;
END;
PROCEDURE print_row(pTable VARCHAR2, pSrc NUMBER, pTgt NUMBER) IS
BEGIN
DBMS_OUTPUT.PUT_LINE(
RPAD(pTable, 40) || ' | ' ||
RPAD(TO_CHAR(pSrc), 8) || ' | ' ||
RPAD(TO_CHAR(pTgt), 8) || ' | ' ||
CASE WHEN pSrc = pTgt THEN 'OK' ELSE 'MISMATCH' END);
END;
BEGIN
-- Source (direct - if table does not exist, error propagates)
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_ALLOTMENT' INTO v1S;
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_HEADER' INTO v2S;
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_ITEM' INTO v3S;
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_ANNOUNCEMENT' INTO v4S;
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_FBL_ITEM' INTO v5S;
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_FULLBID_ARRAY_COMPILED' INTO v6S;
vTotalS := v1S + v2S + v3S + v4S + v5S + v6S;
-- Target
safe_count('SELECT COUNT(*) FROM ODS.TOP_ALLOTMENT_ODS', v1T);
safe_count('SELECT COUNT(*) FROM ODS.TOP_ALLOTMENT_MODIFICATION_HEADER_ODS', v2T);
safe_count('SELECT COUNT(*) FROM ODS.TOP_ALLOTMENT_MODIFICATION_ITEM_ODS', v3T);
safe_count('SELECT COUNT(*) FROM ODS.TOP_ANNOUNCEMENT_ODS', v4T);
safe_count('SELECT COUNT(*) FROM ODS.TOP_FULLBIDLIST_ITEM_ODS', v5T);
safe_count('SELECT COUNT(*) FROM ODS.TOP_FULLBID_ARRAY_COMPILED_ODS', v6T);
vTotalT := v1T + v2T + v3T + v4T + v5T + v6T;
DBMS_OUTPUT.PUT_LINE('POST-EXPORT VERIFICATION SUMMARY');
DBMS_OUTPUT.PUT_LINE(RPAD('Table', 40) || ' | Source | Target | Match');
DBMS_OUTPUT.PUT_LINE(RPAD('-', 75, '-'));
print_row('ALLOTMENT', v1S, v1T); IF v1S != v1T THEN vMismatch := vMismatch + 1; END IF;
print_row('ALLOTMENT_MODIFICATION_HEADER', v2S, v2T); IF v2S != v2T THEN vMismatch := vMismatch + 1; END IF;
print_row('ALLOTMENT_MODIFICATION_ITEM', v3S, v3T); IF v3S != v3T THEN vMismatch := vMismatch + 1; END IF;
print_row('ANNOUNCEMENT', v4S, v4T); IF v4S != v4T THEN vMismatch := vMismatch + 1; END IF;
print_row('FBL_ITEM (->FULLBIDLIST_ITEM)', v5S, v5T); IF v5S != v5T THEN vMismatch := vMismatch + 1; END IF;
print_row('FULLBID_ARRAY_COMPILED', v6S, v6T); IF v6S != v6T THEN vMismatch := vMismatch + 1; END IF;
DBMS_OUTPUT.PUT_LINE(RPAD('-', 75, '-'));
print_row('TOTAL', vTotalS, vTotalT);
DBMS_OUTPUT.PUT_LINE('');
IF vMismatch = 0 THEN
DBMS_OUTPUT.PUT_LINE('SUCCESS: All record counts match - export verified');
ELSE
DBMS_OUTPUT.PUT_LINE('WARNING: ' || vMismatch || ' table(s) have record count mismatches');
DBMS_OUTPUT.PUT_LINE(' Please review export logs and external table access permissions');
END IF;
END;
/
-- Log export completion
INSERT INTO CT_MRDS.A_PROCESS_LOG (PROCESS_NAME, PROCEDURE_NAME, LOG_LEVEL, LOG_MESSAGE, PROCEDURE_PARAMETERS)
VALUES ('MARS-1005', 'EXPORT_TOP_DATA', 'INFO', 'Historical OU_TOP data export completed',
'Check verification scripts for detailed results');
COMMIT;
PROMPT
PROMPT =====================================================================================
PROMPT MARS-1005 OU_TOP Historical Data Export - COMPLETED
PROMPT
PROMPT Next steps:
PROMPT 1. Run: @02_MARS_1005_verify_exports.sql (verify file registration)
PROMPT 2. Run: @03_MARS_1005_verify_data_integrity.sql (full data verification)
PROMPT =====================================================================================

@@ -0,0 +1,215 @@
-- ===================================================================
-- MARS-1005 Verify Exports: Check Export Results and File Creation
-- ===================================================================
-- Purpose: Verify that OU_TOP historical data export completed successfully
-- Author: Grzegorz Michalski
-- Date: 2026-03-06
-- MARS Issue: MARS-1005
-- Tables: 6 OU_TOP.LEGACY_* tables exported to ODS/TOP/ bucket paths
SET SERVEROUTPUT ON SIZE UNLIMITED
SET TIMING ON
PROMPT =========================================================================
PROMPT MARS-1005 Export Verification
PROMPT =========================================================================
-- Check 1: Verify files were registered in A_SOURCE_FILE_RECEIVED
PROMPT Checking export file registration (PROCESS_NAME = MARS-1005)...
DECLARE
vFileCount NUMBER := 0;
vTotalBytes NUMBER := 0;
BEGIN
SELECT COUNT(*), NVL(SUM(BYTES), 0)
INTO vFileCount, vTotalBytes
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE PROCESS_NAME = 'MARS-1005'
AND RECEPTION_DATE >= SYSDATE - 1/24; -- Last hour
DBMS_OUTPUT.PUT_LINE('Registered export files (last hour): ' || vFileCount);
DBMS_OUTPUT.PUT_LINE('Total file size: ' || ROUND(vTotalBytes / 1024, 2) || ' KB');
IF vFileCount = 0 THEN
DBMS_OUTPUT.PUT_LINE('WARNING: No export files found in registration');
ELSIF vFileCount < 6 THEN
DBMS_OUTPUT.PUT_LINE('WARNING: Expected at least 6 files (1 per table), found: ' || vFileCount);
ELSE
DBMS_OUTPUT.PUT_LINE('SUCCESS: All expected export files registered (>= 6)');
END IF;
END;
/
-- Check 2: Show recent export registrations by table
PROMPT Recent export file registrations per table:
SELECT
A_SOURCE_FILE_CONFIG_KEY AS CONFIG_KEY,
SUBSTR(SOURCE_FILE_NAME, 1, 55) AS FILE_NAME,
PROCESSING_STATUS,
ROUND(NVL(BYTES, 0) / 1024, 2) AS SIZE_KB,
TO_CHAR(RECEPTION_DATE, 'HH24:MI:SS') AS TIME_EXPORTED
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE PROCESS_NAME = 'MARS-1005'
AND RECEPTION_DATE >= SYSDATE - 1/24
ORDER BY A_SOURCE_FILE_CONFIG_KEY, RECEPTION_DATE DESC;
-- Check 2b: File count per A_SOURCE_FILE_CONFIG_KEY
PROMPT Export file count per source config key:
SELECT
r.A_SOURCE_FILE_CONFIG_KEY,
c.TABLE_ID,
COUNT(*) AS FILE_COUNT,
ROUND(NVL(SUM(r.BYTES), 0) / 1024, 2) AS TOTAL_KB
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED r
JOIN CT_MRDS.A_SOURCE_FILE_CONFIG c
ON r.A_SOURCE_FILE_CONFIG_KEY = c.A_SOURCE_FILE_CONFIG_KEY
WHERE r.PROCESS_NAME = 'MARS-1005'
AND r.RECEPTION_DATE >= SYSDATE - 1/24
GROUP BY r.A_SOURCE_FILE_CONFIG_KEY, c.TABLE_ID
ORDER BY r.A_SOURCE_FILE_CONFIG_KEY;
-- Check 3: Verify export process logs
PROMPT Checking export process logs...
DECLARE
vLogCount NUMBER := 0;
vErrorCount NUMBER := 0;
BEGIN
SELECT COUNT(*), NVL(SUM(CASE WHEN LOG_LEVEL = 'ERROR' THEN 1 ELSE 0 END), 0)
INTO vLogCount, vErrorCount
FROM CT_MRDS.A_PROCESS_LOG
WHERE PROCESS_NAME = 'MARS-1005'
AND LOG_TIMESTAMP >= SYSTIMESTAMP - INTERVAL '1' HOUR;
DBMS_OUTPUT.PUT_LINE('Process log entries: ' || vLogCount);
DBMS_OUTPUT.PUT_LINE('Error entries: ' || vErrorCount);
IF vErrorCount > 0 THEN
DBMS_OUTPUT.PUT_LINE('WARNING: ' || vErrorCount || ' errors found in process log');
ELSE
DBMS_OUTPUT.PUT_LINE('SUCCESS: No errors found in process log');
END IF;
END;
/
-- Check 4: Display recent process logs
PROMPT Recent MARS-1005 process logs:
SELECT
TO_CHAR(LOG_TIMESTAMP, 'HH24:MI:SS') AS TIME,
PROCEDURE_NAME,
LOG_LEVEL,
SUBSTR(LOG_MESSAGE, 1, 60) AS MESSAGE
FROM CT_MRDS.A_PROCESS_LOG
WHERE PROCESS_NAME = 'MARS-1005'
AND LOG_TIMESTAMP >= SYSTIMESTAMP - INTERVAL '1' HOUR
ORDER BY LOG_TIMESTAMP DESC
FETCH FIRST 10 ROWS ONLY;
-- Check 5: Cloud bucket file verification across all 6 TOP folders
PROMPT Checking cloud bucket files in ODS/TOP/ paths...
DECLARE
vCredentialName VARCHAR2(100) := 'OCI$RESOURCE_PRINCIPAL';
vDataBucketUri VARCHAR2(500);
vTotalFiles NUMBER := 0;
TYPE t_folder IS TABLE OF VARCHAR2(200);
vFolders t_folder := t_folder(
'ODS/TOP/TOP_ALLOTMENT/',
'ODS/TOP/TOP_ALLOTMENT_MODIFICATION_HEADER/',
'ODS/TOP/TOP_ALLOTMENT_MODIFICATION_ITEM/',
'ODS/TOP/TOP_ANNOUNCEMENT/',
'ODS/TOP/TOP_FULLBIDLIST_ITEM/',
'ODS/TOP/TOP_FULLBID_ARRAY_COMPILED/'
);
vFolderFiles NUMBER;
vFolderSize NUMBER;
BEGIN
vDataBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA');
DBMS_OUTPUT.PUT_LINE('Bucket URI: ' || vDataBucketUri);
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE(RPAD('Folder', 55) || RPAD('Files', 8) || 'Total KB');
DBMS_OUTPUT.PUT_LINE(RPAD('-', 75, '-'));
FOR i IN 1..vFolders.COUNT LOOP
vFolderFiles := 0;
vFolderSize := 0;
BEGIN
SELECT COUNT(*), NVL(SUM(bytes), 0)
INTO vFolderFiles, vFolderSize
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(
credential_name => vCredentialName,
location_uri => vDataBucketUri || vFolders(i)
))
WHERE object_name NOT LIKE '%/';
EXCEPTION WHEN OTHERS THEN NULL; -- folder missing or listing failed; report 0 files
END;
DBMS_OUTPUT.PUT_LINE(
RPAD(vFolders(i), 55) ||
RPAD(TO_CHAR(vFolderFiles), 8) ||
ROUND(vFolderSize / 1024, 2) || ' KB'
);
vTotalFiles := vTotalFiles + vFolderFiles;
END LOOP;
DBMS_OUTPUT.PUT_LINE(RPAD('-', 75, '-'));
DBMS_OUTPUT.PUT_LINE('Total files across all TOP folders: ' || vTotalFiles);
IF vTotalFiles = 0 THEN
DBMS_OUTPUT.PUT_LINE('WARNING: No files found in any TOP folder');
ELSE
DBMS_OUTPUT.PUT_LINE('SUCCESS: Files present in ODS/TOP/ bucket paths');
END IF;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('WARNING: Cannot access cloud bucket: ' || SQLERRM);
END;
/
PROMPT
PROMPT =========================================================================
PROMPT MARS-1005 Export Verification Summary
PROMPT =========================================================================
DECLARE
vFileRegCount NUMBER := 0;
vLogErrorCount NUMBER := 0;
vOverallStatus VARCHAR2(20);
BEGIN
SELECT COUNT(*)
INTO vFileRegCount
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE PROCESS_NAME = 'MARS-1005'
AND RECEPTION_DATE >= SYSDATE - 1/24;
SELECT COUNT(*)
INTO vLogErrorCount
FROM CT_MRDS.A_PROCESS_LOG
WHERE PROCESS_NAME = 'MARS-1005'
AND LOG_LEVEL = 'ERROR'
AND LOG_TIMESTAMP >= SYSTIMESTAMP - INTERVAL '1' HOUR;
IF vFileRegCount >= 6 AND vLogErrorCount = 0 THEN
vOverallStatus := 'SUCCESS';
ELSIF vFileRegCount > 0 AND vLogErrorCount = 0 THEN
vOverallStatus := 'PARTIAL SUCCESS';
ELSE
vOverallStatus := 'ISSUES DETECTED';
END IF;
DBMS_OUTPUT.PUT_LINE('MARS-1005 Export Verification: ' || vOverallStatus);
DBMS_OUTPUT.PUT_LINE('- Registered files (last hour): ' || vFileRegCount || ' (expected: >= 6, one per table)');
DBMS_OUTPUT.PUT_LINE('- Process errors: ' || vLogErrorCount);
IF vOverallStatus = 'SUCCESS' THEN
DBMS_OUTPUT.PUT_LINE('SUCCESS: All validations passed - export successful');
ELSIF vOverallStatus = 'PARTIAL SUCCESS' THEN
DBMS_OUTPUT.PUT_LINE('WARNING: Some tables may have incomplete exports - review registrations above');
ELSE
DBMS_OUTPUT.PUT_LINE('ISSUES DETECTED: Review process logs and bucket contents above');
END IF;
END;
/
PROMPT =========================================================================
PROMPT Export Verification Completed
PROMPT =========================================================================

@@ -0,0 +1,359 @@
-- ===================================================================
-- MARS-1005 Verify Data Integrity: Source vs Exported Data Validation
-- ===================================================================
-- Purpose: Verify data integrity between 6 OU_TOP.LEGACY_* source tables
-- and corresponding ODS external tables after export
-- Author: Grzegorz Michalski
-- Date: 2026-03-06
-- MARS Issue: MARS-1005
SET SERVEROUTPUT ON SIZE UNLIMITED
SET TIMING ON
PROMPT =========================================================================
PROMPT MARS-1005 Data Integrity Verification
PROMPT =========================================================================
-- Check 1: Source table record counts
PROMPT Checking source table record counts (OU_TOP.LEGACY_* tables)...
DECLARE
v1Rows NUMBER := 0; v2Rows NUMBER := 0; v3Rows NUMBER := 0;
v4Rows NUMBER := 0; v5Rows NUMBER := 0; v6Rows NUMBER := 0;
vTotalRows NUMBER := 0;
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_ALLOTMENT' INTO v1Rows;
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_HEADER' INTO v2Rows;
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_ITEM' INTO v3Rows;
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_ANNOUNCEMENT' INTO v4Rows;
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_FBL_ITEM' INTO v5Rows;
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM OU_TOP.LEGACY_FULLBID_ARRAY_COMPILED' INTO v6Rows;
vTotalRows := v1Rows + v2Rows + v3Rows + v4Rows + v5Rows + v6Rows;
DBMS_OUTPUT.PUT_LINE('Source table record counts:');
DBMS_OUTPUT.PUT_LINE('- LEGACY_ALLOTMENT : ' || v1Rows || ' records');
DBMS_OUTPUT.PUT_LINE('- LEGACY_ALLOTMENT_MODIFICATION_HEADER: ' || v2Rows || ' records');
DBMS_OUTPUT.PUT_LINE('- LEGACY_ALLOTMENT_MODIFICATION_ITEM : ' || v3Rows || ' records');
DBMS_OUTPUT.PUT_LINE('- LEGACY_ANNOUNCEMENT : ' || v4Rows || ' records');
DBMS_OUTPUT.PUT_LINE('- LEGACY_FBL_ITEM : ' || v5Rows || ' records');
DBMS_OUTPUT.PUT_LINE('- LEGACY_FULLBID_ARRAY_COMPILED : ' || v6Rows || ' records');
DBMS_OUTPUT.PUT_LINE('- TOTAL : ' || vTotalRows || ' records');
IF vTotalRows > 0 THEN
DBMS_OUTPUT.PUT_LINE('SUCCESS: All source tables contain data');
ELSE
DBMS_OUTPUT.PUT_LINE('ERROR: No data found in source tables');
END IF;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('ERROR: Cannot access source tables: ' || SQLERRM);
DBMS_OUTPUT.PUT_LINE('NOTE: Ensure SELECT privilege on OU_TOP.LEGACY_* is granted to CT_MRDS');
END;
/
-- Check 2: A_ETL_LOAD_SET_FK distribution across source tables
PROMPT Checking A_ETL_LOAD_SET_FK distribution...
DECLARE
v1Keys NUMBER := 0; v2Keys NUMBER := 0; v3Keys NUMBER := 0;
v4Keys NUMBER := 0; v5Keys NUMBER := 0; v6Keys NUMBER := 0;
vDistinctAllKeys NUMBER := 0;
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(DISTINCT A_ETL_LOAD_SET_FK) FROM OU_TOP.LEGACY_ALLOTMENT' INTO v1Keys;
EXECUTE IMMEDIATE 'SELECT COUNT(DISTINCT A_ETL_LOAD_SET_FK) FROM OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_HEADER' INTO v2Keys;
EXECUTE IMMEDIATE 'SELECT COUNT(DISTINCT A_ETL_LOAD_SET_FK) FROM OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_ITEM' INTO v3Keys;
EXECUTE IMMEDIATE 'SELECT COUNT(DISTINCT A_ETL_LOAD_SET_FK) FROM OU_TOP.LEGACY_ANNOUNCEMENT' INTO v4Keys;
EXECUTE IMMEDIATE 'SELECT COUNT(DISTINCT A_ETL_LOAD_SET_FK) FROM OU_TOP.LEGACY_FBL_ITEM' INTO v5Keys;
EXECUTE IMMEDIATE 'SELECT COUNT(DISTINCT A_ETL_LOAD_SET_FK) FROM OU_TOP.LEGACY_FULLBID_ARRAY_COMPILED' INTO v6Keys;
SELECT COUNT(DISTINCT wk)
INTO vDistinctAllKeys
FROM (
SELECT A_ETL_LOAD_SET_FK AS wk FROM OU_TOP.LEGACY_ALLOTMENT UNION ALL
SELECT A_ETL_LOAD_SET_FK FROM OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_HEADER UNION ALL
SELECT A_ETL_LOAD_SET_FK FROM OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_ITEM UNION ALL
SELECT A_ETL_LOAD_SET_FK FROM OU_TOP.LEGACY_ANNOUNCEMENT UNION ALL
SELECT A_ETL_LOAD_SET_FK FROM OU_TOP.LEGACY_FBL_ITEM UNION ALL
SELECT A_ETL_LOAD_SET_FK FROM OU_TOP.LEGACY_FULLBID_ARRAY_COMPILED
);
DBMS_OUTPUT.PUT_LINE('Distinct A_ETL_LOAD_SET_FK values per table (source):');
DBMS_OUTPUT.PUT_LINE('- LEGACY_ALLOTMENT : ' || v1Keys);
DBMS_OUTPUT.PUT_LINE('- LEGACY_ALLOTMENT_MODIFICATION_HEADER: ' || v2Keys);
DBMS_OUTPUT.PUT_LINE('- LEGACY_ALLOTMENT_MODIFICATION_ITEM : ' || v3Keys);
DBMS_OUTPUT.PUT_LINE('- LEGACY_ANNOUNCEMENT : ' || v4Keys);
DBMS_OUTPUT.PUT_LINE('- LEGACY_FBL_ITEM : ' || v5Keys);
DBMS_OUTPUT.PUT_LINE('- LEGACY_FULLBID_ARRAY_COMPILED : ' || v6Keys);
DBMS_OUTPUT.PUT_LINE('- Total distinct ETL load keys (all tables): ' || vDistinctAllKeys);
IF vDistinctAllKeys > 0 THEN
DBMS_OUTPUT.PUT_LINE('SUCCESS: ETL load key distribution looks normal');
ELSE
DBMS_OUTPUT.PUT_LINE('ERROR: No ETL load keys found in source data');
END IF;
END;
/
-- Check 3: Template table compatibility verification
PROMPT Checking template table compatibility (CT_ET_TEMPLATES.TOP_*)...
DECLARE
vCols1 NUMBER := 0; vCols2 NUMBER := 0; vCols3 NUMBER := 0;
vCols4 NUMBER := 0; vCols5 NUMBER := 0; vCols6 NUMBER := 0;
BEGIN
SELECT COUNT(*) INTO vCols1
FROM all_tab_columns WHERE owner = 'CT_ET_TEMPLATES' AND table_name = 'TOP_ALLOTMENT';
SELECT COUNT(*) INTO vCols2
FROM all_tab_columns WHERE owner = 'CT_ET_TEMPLATES' AND table_name = 'TOP_ALLOTMENT_MODIFICATION_HEADER';
SELECT COUNT(*) INTO vCols3
FROM all_tab_columns WHERE owner = 'CT_ET_TEMPLATES' AND table_name = 'TOP_ALLOTMENT_MODIFICATION_ITEM';
SELECT COUNT(*) INTO vCols4
FROM all_tab_columns WHERE owner = 'CT_ET_TEMPLATES' AND table_name = 'TOP_ANNOUNCEMENT';
SELECT COUNT(*) INTO vCols5
FROM all_tab_columns WHERE owner = 'CT_ET_TEMPLATES' AND table_name = 'TOP_FULLBIDLIST_ITEM';
SELECT COUNT(*) INTO vCols6
FROM all_tab_columns WHERE owner = 'CT_ET_TEMPLATES' AND table_name = 'TOP_FULLBID_ARRAY_COMPILED';
DBMS_OUTPUT.PUT_LINE('Template table column counts:');
DBMS_OUTPUT.PUT_LINE('- CT_ET_TEMPLATES.TOP_ALLOTMENT : ' || vCols1 || ' columns');
DBMS_OUTPUT.PUT_LINE('- CT_ET_TEMPLATES.TOP_ALLOTMENT_MODIFICATION_HEADER: ' || vCols2 || ' columns');
DBMS_OUTPUT.PUT_LINE('- CT_ET_TEMPLATES.TOP_ALLOTMENT_MODIFICATION_ITEM : ' || vCols3 || ' columns');
DBMS_OUTPUT.PUT_LINE('- CT_ET_TEMPLATES.TOP_ANNOUNCEMENT : ' || vCols4 || ' columns');
DBMS_OUTPUT.PUT_LINE('- CT_ET_TEMPLATES.TOP_FULLBIDLIST_ITEM : ' || vCols5 || ' columns');
DBMS_OUTPUT.PUT_LINE('- CT_ET_TEMPLATES.TOP_FULLBID_ARRAY_COMPILED : ' || vCols6 || ' columns');
IF vCols1 > 0 AND vCols2 > 0 AND vCols3 > 0 AND vCols4 > 0 AND vCols5 > 0 AND vCols6 > 0 THEN
DBMS_OUTPUT.PUT_LINE('SUCCESS: All 6 template tables have defined structure');
ELSE
DBMS_OUTPUT.PUT_LINE('ERROR: One or more template tables missing columns');
END IF;
END;
/
-- Check 4: Verify A_SOURCE_FILE_CONFIG entries for 6 TOP tables
PROMPT Checking A_SOURCE_FILE_CONFIG registration for TOP tables...
DECLARE
vConfigCount NUMBER := 0;
BEGIN
SELECT COUNT(*)
INTO vConfigCount
FROM CT_MRDS.A_SOURCE_FILE_CONFIG
WHERE A_SOURCE_FILE_CONFIG_KEY IN (705, 683, 684, 689, 696, 697); -- MARS-1005 config keys
DBMS_OUTPUT.PUT_LINE('A_SOURCE_FILE_CONFIG entries for MARS-1005 tables: ' || vConfigCount || ' (expected: 6)');
DBMS_OUTPUT.PUT_LINE('Config keys: 705(ALLOTMENT), 683(MOD_HDR), 684(MOD_ITEM), 689(ANNOUNCEMENT), 696(FBL_ITEM), 697(FBA_COMPILED)');
IF vConfigCount = 6 THEN
DBMS_OUTPUT.PUT_LINE('SUCCESS: All 6 source file config entries confirmed');
ELSE
DBMS_OUTPUT.PUT_LINE('ERROR: Missing config entries (' || (6 - vConfigCount) || ' missing)');
END IF;
END;
/
PROMPT A_SOURCE_FILE_CONFIG details for TOP tables:
SELECT
A_SOURCE_FILE_CONFIG_KEY,
TABLE_ID,
TEMPLATE_TABLE_NAME,
SUBSTR(SOURCE_FILE_NAME_PATTERN, 1, 40) AS FILE_PATTERN
FROM CT_MRDS.A_SOURCE_FILE_CONFIG
WHERE A_SOURCE_FILE_CONFIG_KEY IN (705, 683, 684, 689, 696, 697)
ORDER BY A_SOURCE_FILE_CONFIG_KEY;
PROMPT =====================================================================================
PROMPT MARS-1005 Record Count Verification
PROMPT =====================================================================================
PROMPT Comparing source table counts with exported external table counts
PROMPT =====================================================================================
DECLARE
TYPE t_table_info IS RECORD (
source_schema VARCHAR2(50),
source_table VARCHAR2(100),
external_table VARCHAR2(100),
description VARCHAR2(200)
);
TYPE t_table_list IS TABLE OF t_table_info;
vTables t_table_list;
vSourceCount NUMBER;
vTargetCount NUMBER;
vTotalSourceCount NUMBER := 0;
vTotalTargetCount NUMBER := 0;
vMismatchCount NUMBER := 0;
vSql VARCHAR2(4000);
vFileCount NUMBER := 0;
BEGIN
DBMS_OUTPUT.PUT_LINE('VERIFICATION TIME: ' || TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS'));
DBMS_OUTPUT.PUT_LINE('');
-- Initialize table list with 6 OU_TOP LEGACY table configuration
vTables := t_table_list(
t_table_info('OU_TOP', 'LEGACY_ALLOTMENT', 'ODS.TOP_ALLOTMENT_ODS', 'ALLOTMENT data (A_SOURCE_FILE_CONFIG_KEY=705)'),
t_table_info('OU_TOP', 'LEGACY_ALLOTMENT_MODIFICATION_HEADER', 'ODS.TOP_ALLOTMENT_MODIFICATION_HEADER_ODS', 'MOD HEADER data (A_SOURCE_FILE_CONFIG_KEY=683)'),
t_table_info('OU_TOP', 'LEGACY_ALLOTMENT_MODIFICATION_ITEM', 'ODS.TOP_ALLOTMENT_MODIFICATION_ITEM_ODS', 'MOD ITEM data (A_SOURCE_FILE_CONFIG_KEY=684)'),
t_table_info('OU_TOP', 'LEGACY_ANNOUNCEMENT', 'ODS.TOP_ANNOUNCEMENT_ODS', 'ANNOUNCEMENT data (A_SOURCE_FILE_CONFIG_KEY=689)'),
t_table_info('OU_TOP', 'LEGACY_FBL_ITEM', 'ODS.TOP_FULLBIDLIST_ITEM_ODS', 'FBL ITEM data (A_SOURCE_FILE_CONFIG_KEY=696)'),
t_table_info('OU_TOP', 'LEGACY_FULLBID_ARRAY_COMPILED', 'ODS.TOP_FULLBID_ARRAY_COMPILED_ODS', 'FBA COMPILED data (A_SOURCE_FILE_CONFIG_KEY=697)')
);
DBMS_OUTPUT.PUT_LINE('-----------------------------------------------------------------------------------------');
DBMS_OUTPUT.PUT_LINE('Table Name Source Count Target Count Status');
DBMS_OUTPUT.PUT_LINE('-----------------------------------------------------------------------------------------');
FOR i IN 1..vTables.COUNT LOOP
-- Get source table count
vSql := 'SELECT COUNT(*) FROM ' || vTables(i).source_schema || '.' || vTables(i).source_table;
BEGIN
EXECUTE IMMEDIATE vSql INTO vSourceCount;
vTotalSourceCount := vTotalSourceCount + vSourceCount;
EXCEPTION
WHEN OTHERS THEN
vSourceCount := -1;
vMismatchCount := vMismatchCount + 1; -- an inaccessible source table counts as a failure
DBMS_OUTPUT.PUT_LINE(RPAD(vTables(i).source_table, 24) || 'ERROR: Cannot access source table');
CONTINUE;
END;
-- Get target external table count
vSql := 'SELECT COUNT(*) FROM ' || vTables(i).external_table;
BEGIN
EXECUTE IMMEDIATE vSql INTO vTargetCount;
vTotalTargetCount := vTotalTargetCount + vTargetCount;
EXCEPTION
WHEN OTHERS THEN
-- Handle expected errors for empty external tables
-- ORA-29913: error in executing ODCIEXTTABLEOPEN callout
-- ORA-29400: data cartridge error
-- KUP-13023: nothing matched wildcard query (no files in bucket)
-- NOTE: ORA-30653 (reject limit) is a real data quality error, not treated as empty
IF vSourceCount = 0 OR SQLCODE IN (-29913, -29400) OR SQLERRM LIKE '%KUP-13023%' THEN
vTargetCount := 0; -- Treat as empty (no files exported yet)
ELSE
vTargetCount := -1; -- Real error
END IF;
END;
-- Display comparison results with thousands separators
DECLARE
vStatus VARCHAR2(20);
vSourceDisplay VARCHAR2(17);
vTargetDisplay VARCHAR2(17);
BEGIN
-- Format source count display
IF vSourceCount = -1 THEN
vSourceDisplay := 'ERROR';
ELSE
vSourceDisplay := TO_CHAR(vSourceCount, '9,999,999,999');
END IF;
-- Format target count display
IF vTargetCount = -1 THEN
vTargetDisplay := 'ERROR';
ELSE
vTargetDisplay := TO_CHAR(vTargetCount, '9,999,999,999');
END IF;
-- Determine status
IF vSourceCount = vTargetCount THEN
vStatus := 'PASS';
ELSIF vTargetCount = -1 THEN
vStatus := 'ERROR';
vMismatchCount := vMismatchCount + 1;
ELSIF vSourceCount = -1 THEN
vStatus := 'ERROR';
vMismatchCount := vMismatchCount + 1;
ELSE
vStatus := 'MISMATCH';
vMismatchCount := vMismatchCount + 1;
END IF;
DBMS_OUTPUT.PUT_LINE(
RPAD(vTables(i).source_table, 24) ||
LPAD(vSourceDisplay, 15) ||
LPAD(vTargetDisplay, 15) || ' ' ||
vStatus
);
END;
END LOOP;
DBMS_OUTPUT.PUT_LINE('-----------------------------------------------------------------------------------------');
DBMS_OUTPUT.PUT_LINE(
RPAD('TOTALS', 24) ||
LPAD(TO_CHAR(vTotalSourceCount, '9,999,999,999'), 15) ||
LPAD(TO_CHAR(vTotalTargetCount, '9,999,999,999'), 15)
);
DBMS_OUTPUT.PUT_LINE('-----------------------------------------------------------------------------------------');
DBMS_OUTPUT.PUT_LINE('');
-- Count MARS-1005 registered export files
SELECT COUNT(*)
INTO vFileCount
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE PROCESS_NAME = 'MARS-1005'
AND RECEPTION_DATE >= SYSDATE - 1/24;
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
DBMS_OUTPUT.PUT_LINE('Record Count Verification Summary');
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
DBMS_OUTPUT.PUT_LINE('Total source records: ' || TO_CHAR(vTotalSourceCount, '9,999,999,999'));
DBMS_OUTPUT.PUT_LINE('Total target records: ' || TO_CHAR(vTotalTargetCount, '9,999,999,999') || ' (exported to ODS)');
DBMS_OUTPUT.PUT_LINE('Export files registered (PROCESS_NAME=MARS-1005): ' || vFileCount);
DBMS_OUTPUT.PUT_LINE('');
IF vMismatchCount = 0 AND vFileCount > 0 THEN
DBMS_OUTPUT.PUT_LINE('[PASS] VERIFICATION PASSED');
DBMS_OUTPUT.PUT_LINE(' All record counts match between source and exported data');
DBMS_OUTPUT.PUT_LINE(' Export completed successfully');
ELSIF vMismatchCount > 0 THEN
DBMS_OUTPUT.PUT_LINE('[INFO] VERIFICATION COMPLETED WITH MISMATCHES');
DBMS_OUTPUT.PUT_LINE(' Found ' || vMismatchCount || ' table(s) with count mismatches');
DBMS_OUTPUT.PUT_LINE(' NOTE: Mismatches may be caused by pre-existing files in buckets (see pre-check)');
DBMS_OUTPUT.PUT_LINE(' Review export logs and pre-check results before re-running exports');
ELSE
DBMS_OUTPUT.PUT_LINE('[WARN] NO EXPORT DETECTED');
DBMS_OUTPUT.PUT_LINE(' No files found in export registration');
DBMS_OUTPUT.PUT_LINE(' Verify export execution completed successfully');
END IF;
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('Legend:');
DBMS_OUTPUT.PUT_LINE(' PASS - Record counts match (export successful)');
DBMS_OUTPUT.PUT_LINE(' MISMATCH - Record counts differ (may be pre-existing files or export issue)');
DBMS_OUTPUT.PUT_LINE(' Check pre-check results to identify pre-existing files');
DBMS_OUTPUT.PUT_LINE(' ERROR - Cannot access table (verify table exists and permissions)');
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
-- ETL Load Key Analysis
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('ETL Load Key Analysis (distinct A_ETL_LOAD_SET_FK per source table):');
DECLARE
v1Keys NUMBER; v2Keys NUMBER; v3Keys NUMBER;
v4Keys NUMBER; v5Keys NUMBER; v6Keys NUMBER;
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(DISTINCT A_ETL_LOAD_SET_FK) FROM OU_TOP.LEGACY_ALLOTMENT' INTO v1Keys;
EXECUTE IMMEDIATE 'SELECT COUNT(DISTINCT A_ETL_LOAD_SET_FK) FROM OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_HEADER' INTO v2Keys;
EXECUTE IMMEDIATE 'SELECT COUNT(DISTINCT A_ETL_LOAD_SET_FK) FROM OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_ITEM' INTO v3Keys;
EXECUTE IMMEDIATE 'SELECT COUNT(DISTINCT A_ETL_LOAD_SET_FK) FROM OU_TOP.LEGACY_ANNOUNCEMENT' INTO v4Keys;
EXECUTE IMMEDIATE 'SELECT COUNT(DISTINCT A_ETL_LOAD_SET_FK) FROM OU_TOP.LEGACY_FBL_ITEM' INTO v5Keys;
EXECUTE IMMEDIATE 'SELECT COUNT(DISTINCT A_ETL_LOAD_SET_FK) FROM OU_TOP.LEGACY_FULLBID_ARRAY_COMPILED' INTO v6Keys;
DBMS_OUTPUT.PUT_LINE('- LEGACY_ALLOTMENT : ' || v1Keys || ' distinct keys');
DBMS_OUTPUT.PUT_LINE('- LEGACY_ALLOTMENT_MODIFICATION_HEADER: ' || v2Keys || ' distinct keys');
DBMS_OUTPUT.PUT_LINE('- LEGACY_ALLOTMENT_MODIFICATION_ITEM : ' || v3Keys || ' distinct keys');
DBMS_OUTPUT.PUT_LINE('- LEGACY_ANNOUNCEMENT : ' || v4Keys || ' distinct keys');
DBMS_OUTPUT.PUT_LINE('- LEGACY_FBL_ITEM : ' || v5Keys || ' distinct keys');
DBMS_OUTPUT.PUT_LINE('- LEGACY_FULLBID_ARRAY_COMPILED : ' || v6Keys || ' distinct keys');
DBMS_OUTPUT.PUT_LINE('- Actual export files registered: ' || vFileCount);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('WARNING: Cannot query ETL load keys: ' || SQLERRM);
END;
END;
/
PROMPT =========================================================================
PROMPT Data Integrity Verification Completed
PROMPT =========================================================================

@@ -0,0 +1,204 @@
--=============================================================================================================================
-- MARS-1005 ROLLBACK: Delete Exported CSV Files from DATA Bucket
--=============================================================================================================================
-- Purpose: Delete exported CSV files from ODS/TOP/ bucket folders for 6 OU_TOP LEGACY tables
-- WARNING: This will permanently delete exported data files!
-- Author: Grzegorz Michalski
-- Date: 2026-03-06
-- Related: MARS-1005 - OU_TOP Historical Data Export Rollback
--=============================================================================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
PROMPT ========================================================================
PROMPT ROLLBACK: Deleting OU_TOP LEGACY exported files from DATA Bucket
PROMPT ========================================================================
PROMPT WARNING: This will delete files registered with PROCESS_NAME = 'MARS-1005'
PROMPT from "ODS/TOP/*" paths in the DATA bucket.
PROMPT ========================================================================
-- Each block below deletes, for one TOP table folder, only the objects registered
-- in A_SOURCE_FILE_RECEIVED under PROCESS_NAME = 'MARS-1005' whose
-- SOURCE_FILE_NAME matches that table's name pattern.
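-- A generic helper could replace the six near-identical delete blocks below.
-- The sketch is illustrative only: the procedure and parameter names are
-- hypothetical and not part of this changeset.
--
-- PROCEDURE delete_mars1005_files(pFolderPath IN VARCHAR2, pNamePattern IN VARCHAR2) IS
--   vBucketUri VARCHAR2(500) := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA');
-- BEGIN
--   FOR rec IN (SELECT SOURCE_FILE_NAME
--                 FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
--                WHERE PROCESS_NAME = 'MARS-1005'
--                  AND SOURCE_FILE_NAME LIKE pNamePattern) LOOP
--     DBMS_CLOUD.DELETE_OBJECT(
--       credential_name => 'OCI$RESOURCE_PRINCIPAL',
--       object_uri      => vBucketUri || pFolderPath || rec.SOURCE_FILE_NAME);
--   END LOOP;
-- END;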
-- ROLLBACK TABLE 1/6: LEGACY_ALLOTMENT -> ODS/TOP/TOP_ALLOTMENT/
PROMPT ROLLBACK: Deleting TOP_ALLOTMENT files...
DECLARE
vDataBucketUri VARCHAR2(500);
vFolderPath VARCHAR2(200) := 'ODS/TOP/TOP_ALLOTMENT/';
vFileCount NUMBER := 0;
BEGIN
vDataBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA');
DBMS_OUTPUT.PUT_LINE('Deleting TOP_ALLOTMENT files registered by MARS-1005...');
-- Only delete files registered by MARS-1005 (safe - does not touch pre-existing files)
FOR rec IN (
SELECT SOURCE_FILE_NAME
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE PROCESS_NAME = 'MARS-1005'
AND SOURCE_FILE_NAME LIKE '%LEGACY_ALLOTMENT_%'
AND SOURCE_FILE_NAME NOT LIKE '%MODIFICATION%'
) LOOP
BEGIN
DBMS_CLOUD.DELETE_OBJECT(
credential_name => 'OCI$RESOURCE_PRINCIPAL',
object_uri => vDataBucketUri || vFolderPath || rec.SOURCE_FILE_NAME
);
DBMS_OUTPUT.PUT_LINE(' Deleted: ' || rec.SOURCE_FILE_NAME);
vFileCount := vFileCount + 1;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -20404 THEN
DBMS_OUTPUT.PUT_LINE(' Skipped (not found): ' || rec.SOURCE_FILE_NAME);
ELSE RAISE;
END IF;
END;
END LOOP;
IF vFileCount = 0 THEN
DBMS_OUTPUT.PUT_LINE(' INFO: No files found to delete');
END IF;
DBMS_OUTPUT.PUT_LINE('SUCCESS: TOP_ALLOTMENT files deleted (' || vFileCount || ' file(s))');
END;
/
-- ROLLBACK TABLE 2/6: LEGACY_ALLOTMENT_MODIFICATION_HEADER -> ODS/TOP/TOP_ALLOTMENT_MODIFICATION_HEADER/
PROMPT ROLLBACK: Deleting TOP_ALLOTMENT_MODIFICATION_HEADER files...
DECLARE
vDataBucketUri VARCHAR2(500);
vFolderPath VARCHAR2(200) := 'ODS/TOP/TOP_ALLOTMENT_MODIFICATION_HEADER/';
vFileCount NUMBER := 0;
BEGIN
vDataBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA');
DBMS_OUTPUT.PUT_LINE('Deleting TOP_ALLOTMENT_MODIFICATION_HEADER files registered by MARS-1005...');
FOR rec IN (
SELECT SOURCE_FILE_NAME FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE PROCESS_NAME = 'MARS-1005'
AND SOURCE_FILE_NAME LIKE '%LEGACY_ALLOTMENT_MODIFICATION_HEADER%'
) LOOP
BEGIN
DBMS_CLOUD.DELETE_OBJECT(credential_name => 'OCI$RESOURCE_PRINCIPAL', object_uri => vDataBucketUri || vFolderPath || rec.SOURCE_FILE_NAME);
DBMS_OUTPUT.PUT_LINE(' Deleted: ' || rec.SOURCE_FILE_NAME);
vFileCount := vFileCount + 1;
EXCEPTION WHEN OTHERS THEN
IF SQLCODE = -20404 THEN DBMS_OUTPUT.PUT_LINE(' Skipped (not found): ' || rec.SOURCE_FILE_NAME);
ELSE RAISE; END IF;
END;
END LOOP;
IF vFileCount = 0 THEN DBMS_OUTPUT.PUT_LINE(' INFO: No files found to delete'); END IF;
DBMS_OUTPUT.PUT_LINE('SUCCESS: TOP_ALLOTMENT_MODIFICATION_HEADER files deleted (' || vFileCount || ' file(s))');
END;
/
-- ROLLBACK TABLE 3/6: LEGACY_ALLOTMENT_MODIFICATION_ITEM -> ODS/TOP/TOP_ALLOTMENT_MODIFICATION_ITEM/
PROMPT ROLLBACK: Deleting TOP_ALLOTMENT_MODIFICATION_ITEM files...
DECLARE
vDataBucketUri VARCHAR2(500);
vFolderPath VARCHAR2(200) := 'ODS/TOP/TOP_ALLOTMENT_MODIFICATION_ITEM/';
vFileCount NUMBER := 0;
BEGIN
vDataBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA');
DBMS_OUTPUT.PUT_LINE('Deleting TOP_ALLOTMENT_MODIFICATION_ITEM files registered by MARS-1005...');
FOR rec IN (
SELECT SOURCE_FILE_NAME FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE PROCESS_NAME = 'MARS-1005'
AND SOURCE_FILE_NAME LIKE '%LEGACY_ALLOTMENT_MODIFICATION_ITEM%'
) LOOP
BEGIN
DBMS_CLOUD.DELETE_OBJECT(credential_name => 'OCI$RESOURCE_PRINCIPAL', object_uri => vDataBucketUri || vFolderPath || rec.SOURCE_FILE_NAME);
DBMS_OUTPUT.PUT_LINE(' Deleted: ' || rec.SOURCE_FILE_NAME);
vFileCount := vFileCount + 1;
EXCEPTION WHEN OTHERS THEN
IF SQLCODE = -20404 THEN DBMS_OUTPUT.PUT_LINE(' Skipped (not found): ' || rec.SOURCE_FILE_NAME);
ELSE RAISE; END IF;
END;
END LOOP;
IF vFileCount = 0 THEN DBMS_OUTPUT.PUT_LINE(' INFO: No files found to delete'); END IF;
DBMS_OUTPUT.PUT_LINE('SUCCESS: TOP_ALLOTMENT_MODIFICATION_ITEM files deleted (' || vFileCount || ' file(s))');
END;
/
-- ROLLBACK TABLE 4/6: LEGACY_ANNOUNCEMENT -> ODS/TOP/TOP_ANNOUNCEMENT/
PROMPT ROLLBACK: Deleting TOP_ANNOUNCEMENT files...
DECLARE
vDataBucketUri VARCHAR2(500);
vFolderPath VARCHAR2(200) := 'ODS/TOP/TOP_ANNOUNCEMENT/';
vFileCount NUMBER := 0;
BEGIN
vDataBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA');
DBMS_OUTPUT.PUT_LINE('Deleting TOP_ANNOUNCEMENT files registered by MARS-1005...');
FOR rec IN (
SELECT SOURCE_FILE_NAME FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE PROCESS_NAME = 'MARS-1005'
AND SOURCE_FILE_NAME LIKE '%LEGACY_ANNOUNCEMENT%'
) LOOP
BEGIN
DBMS_CLOUD.DELETE_OBJECT(credential_name => 'OCI$RESOURCE_PRINCIPAL', object_uri => vDataBucketUri || vFolderPath || rec.SOURCE_FILE_NAME);
DBMS_OUTPUT.PUT_LINE(' Deleted: ' || rec.SOURCE_FILE_NAME);
vFileCount := vFileCount + 1;
EXCEPTION WHEN OTHERS THEN
IF SQLCODE = -20404 THEN DBMS_OUTPUT.PUT_LINE(' Skipped (not found): ' || rec.SOURCE_FILE_NAME);
ELSE RAISE; END IF;
END;
END LOOP;
IF vFileCount = 0 THEN DBMS_OUTPUT.PUT_LINE(' INFO: No files found to delete'); END IF;
DBMS_OUTPUT.PUT_LINE('SUCCESS: TOP_ANNOUNCEMENT files deleted (' || vFileCount || ' file(s))');
END;
/
-- ROLLBACK TABLE 5/6: LEGACY_FBL_ITEM -> ODS/TOP/TOP_FULLBIDLIST_ITEM/
PROMPT ROLLBACK: Deleting TOP_FULLBIDLIST_ITEM files...
DECLARE
vDataBucketUri VARCHAR2(500);
vFolderPath VARCHAR2(200) := 'ODS/TOP/TOP_FULLBIDLIST_ITEM/';
vFileCount NUMBER := 0;
BEGIN
vDataBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA');
DBMS_OUTPUT.PUT_LINE('Deleting TOP_FULLBIDLIST_ITEM files registered by MARS-1005...');
FOR rec IN (
SELECT SOURCE_FILE_NAME FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE PROCESS_NAME = 'MARS-1005'
AND SOURCE_FILE_NAME LIKE '%LEGACY_FBL_ITEM%'
) LOOP
BEGIN
DBMS_CLOUD.DELETE_OBJECT(credential_name => 'OCI$RESOURCE_PRINCIPAL', object_uri => vDataBucketUri || vFolderPath || rec.SOURCE_FILE_NAME);
DBMS_OUTPUT.PUT_LINE(' Deleted: ' || rec.SOURCE_FILE_NAME);
vFileCount := vFileCount + 1;
EXCEPTION WHEN OTHERS THEN
IF SQLCODE = -20404 THEN DBMS_OUTPUT.PUT_LINE(' Skipped (not found): ' || rec.SOURCE_FILE_NAME);
ELSE RAISE; END IF;
END;
END LOOP;
IF vFileCount = 0 THEN DBMS_OUTPUT.PUT_LINE(' INFO: No files found to delete'); END IF;
DBMS_OUTPUT.PUT_LINE('SUCCESS: TOP_FULLBIDLIST_ITEM files deleted (' || vFileCount || ' file(s))');
END;
/
-- ROLLBACK TABLE 6/6: LEGACY_FULLBID_ARRAY_COMPILED -> ODS/TOP/TOP_FULLBID_ARRAY_COMPILED/
PROMPT ROLLBACK: Deleting TOP_FULLBID_ARRAY_COMPILED files...
DECLARE
vDataBucketUri VARCHAR2(500);
vFolderPath VARCHAR2(200) := 'ODS/TOP/TOP_FULLBID_ARRAY_COMPILED/';
vFileCount NUMBER := 0;
BEGIN
vDataBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA');
DBMS_OUTPUT.PUT_LINE('Deleting TOP_FULLBID_ARRAY_COMPILED files registered by MARS-1005...');
FOR rec IN (
SELECT SOURCE_FILE_NAME FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE PROCESS_NAME = 'MARS-1005'
AND SOURCE_FILE_NAME LIKE '%LEGACY_FULLBID_ARRAY_COMPILED%'
) LOOP
BEGIN
DBMS_CLOUD.DELETE_OBJECT(credential_name => 'OCI$RESOURCE_PRINCIPAL', object_uri => vDataBucketUri || vFolderPath || rec.SOURCE_FILE_NAME);
DBMS_OUTPUT.PUT_LINE(' Deleted: ' || rec.SOURCE_FILE_NAME);
vFileCount := vFileCount + 1;
EXCEPTION WHEN OTHERS THEN
IF SQLCODE = -20404 THEN DBMS_OUTPUT.PUT_LINE(' Skipped (not found): ' || rec.SOURCE_FILE_NAME);
ELSE RAISE; END IF;
END;
END LOOP;
IF vFileCount = 0 THEN DBMS_OUTPUT.PUT_LINE(' INFO: No files found to delete'); END IF;
DBMS_OUTPUT.PUT_LINE('SUCCESS: TOP_FULLBID_ARRAY_COMPILED files deleted (' || vFileCount || ' file(s))');
END;
/
PROMPT SUCCESS: All CSV file deletion operations completed

@@ -0,0 +1,78 @@
-- ===================================================================
-- MARS-1005 Rollback Step 2: Delete File Registrations
-- ===================================================================
-- Purpose: Remove MARS-1005 export file registrations from A_SOURCE_FILE_RECEIVED
-- Author: Grzegorz Michalski
-- Date: 2026-02-12
SET SERVEROUTPUT ON SIZE UNLIMITED
SET TIMING ON
PROMPT =========================================================================
PROMPT MARS-1005 Rollback Step 2: Delete File Registrations
PROMPT =========================================================================
DECLARE
vFileCount NUMBER := 0;
vDeletedCount NUMBER := 0;
vErrorMsg VARCHAR2(4000);
BEGIN
-- Count files to be deleted (using PROCESS_NAME)
SELECT COUNT(*)
INTO vFileCount
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE PROCESS_NAME = 'MARS-1005';
DBMS_OUTPUT.PUT_LINE('Files to be deleted: ' || vFileCount);
DBMS_OUTPUT.PUT_LINE('Using PROCESS_NAME = ''MARS-1005'' filter');
IF vFileCount > 0 THEN
-- Show files before deletion
DBMS_OUTPUT.PUT_LINE('Files being removed:');
FOR rec IN (
SELECT A_SOURCE_FILE_RECEIVED_KEY,
SUBSTR(SOURCE_FILE_NAME, 1, 60) AS FILE_NAME,
TO_CHAR(RECEPTION_DATE, 'YYYY-MM-DD HH24:MI:SS') AS RECEIVED_TIME,
PROCESS_NAME
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE PROCESS_NAME = 'MARS-1005'
ORDER BY RECEPTION_DATE DESC
) LOOP
DBMS_OUTPUT.PUT_LINE('- ' || rec.FILE_NAME || ' (ID: ' || rec.A_SOURCE_FILE_RECEIVED_KEY || ', Process: ' || rec.PROCESS_NAME || ')');
END LOOP;
-- Delete the file registrations using PROCESS_NAME
DELETE FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE PROCESS_NAME = 'MARS-1005';
vDeletedCount := SQL%ROWCOUNT;
COMMIT;
DBMS_OUTPUT.PUT_LINE('SUCCESS: Deleted ' || vDeletedCount || ' file registrations');
-- Log the rollback action
INSERT INTO CT_MRDS.A_PROCESS_LOG (PROCESS_NAME, PROCEDURE_NAME, LOG_LEVEL, LOG_MESSAGE)
VALUES ('MARS-1005-ROLLBACK', 'DELETE_FILE_REGISTRATIONS', 'INFO',
'Deleted ' || vDeletedCount || ' file registrations');
COMMIT;
ELSE
DBMS_OUTPUT.PUT_LINE('SUCCESS: No file registrations found to delete');
END IF;
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
vErrorMsg := 'Failed to delete file registrations: ' || SQLERRM;
DBMS_OUTPUT.PUT_LINE('ERROR: File registration deletion failed: ' || SQLERRM);
-- Log the error
INSERT INTO CT_MRDS.A_PROCESS_LOG (PROCESS_NAME, PROCEDURE_NAME, LOG_LEVEL, LOG_MESSAGE)
VALUES ('MARS-1005-ROLLBACK', 'DELETE_FILE_REGISTRATIONS', 'ERROR', vErrorMsg);
COMMIT;
RAISE;
END;
/
PROMPT =========================================================================
PROMPT File Registration Rollback Completed
PROMPT =========================================================================

@@ -0,0 +1,77 @@
-- ===================================================================
-- MARS-1005 Rollback Step 3: Clean Process Logs
-- ===================================================================
-- Purpose: Remove MARS-1005 process logs from A_PROCESS_LOG
-- Author: Grzegorz Michalski
-- Date: 2026-02-12
SET SERVEROUTPUT ON SIZE UNLIMITED
SET TIMING ON
PROMPT =========================================================================
PROMPT MARS-1005 Rollback Step 3: Clean Process Logs
PROMPT =========================================================================
DECLARE
vLogCount NUMBER := 0;
vDeletedCount NUMBER := 0;
vErrorMsg VARCHAR2(4000);
BEGIN
-- Count logs to be deleted
SELECT COUNT(*)
INTO vLogCount
FROM CT_MRDS.A_PROCESS_LOG
WHERE PROCESS_NAME IN ('MARS-1005', 'MARS-1005-ROLLBACK')
AND LOG_TIMESTAMP >= SYSDATE - 7; -- Last week (safety)
DBMS_OUTPUT.PUT_LINE('Process log entries to be deleted: ' || vLogCount);
IF vLogCount > 0 THEN
-- Show recent logs before deletion
DBMS_OUTPUT.PUT_LINE('Recent MARS-1005 log entries being removed:');
FOR rec IN (
SELECT A_PROCESS_LOG_KEY,
TO_CHAR(LOG_TIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') AS LOG_TIME,
PROCEDURE_NAME,
LOG_LEVEL,
SUBSTR(LOG_MESSAGE, 1, 40) AS MESSAGE
FROM CT_MRDS.A_PROCESS_LOG
WHERE PROCESS_NAME IN ('MARS-1005', 'MARS-1005-ROLLBACK')
AND LOG_TIMESTAMP >= SYSDATE - 7
ORDER BY LOG_TIMESTAMP DESC
FETCH FIRST 10 ROWS ONLY
) LOOP
DBMS_OUTPUT.PUT_LINE('- ' || rec.LOG_TIME || ' [' || rec.LOG_LEVEL || '] ' ||
rec.PROCEDURE_NAME || ': ' || rec.MESSAGE);
END LOOP;
-- Delete the process logs
DELETE FROM CT_MRDS.A_PROCESS_LOG
WHERE PROCESS_NAME IN ('MARS-1005', 'MARS-1005-ROLLBACK')
AND LOG_TIMESTAMP >= SYSDATE - 7;
vDeletedCount := SQL%ROWCOUNT;
COMMIT;
DBMS_OUTPUT.PUT_LINE('SUCCESS: Deleted ' || vDeletedCount || ' process log entries');
ELSE
DBMS_OUTPUT.PUT_LINE('SUCCESS: No process log entries found to delete');
END IF;
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
vErrorMsg := 'Failed to clean process logs: ' || SQLERRM;
DBMS_OUTPUT.PUT_LINE('ERROR: Process log cleanup failed: ' || SQLERRM);
-- Log the error (will remain after rollback for debugging)
INSERT INTO CT_MRDS.A_PROCESS_LOG (PROCESS_NAME, PROCEDURE_NAME, LOG_LEVEL, LOG_MESSAGE)
VALUES ('MARS-1005-ROLLBACK', 'CLEANUP_PROCESS_LOGS', 'ERROR', vErrorMsg);
COMMIT;
RAISE;
END;
/
PROMPT =========================================================================
PROMPT Process Log Cleanup Completed
PROMPT =========================================================================

@@ -0,0 +1,207 @@
-- ===================================================================
-- MARS-1005 Rollback Verification: Confirm Rollback Completion
-- ===================================================================
-- Purpose: Verify that MARS-1005 rollback completed successfully
-- Author: Grzegorz Michalski
-- Date: 2026-02-12
SET SERVEROUTPUT ON SIZE UNLIMITED
SET TIMING ON
PROMPT =========================================================================
PROMPT MARS-1005 Rollback Verification
PROMPT =========================================================================
-- Check 1: Verify file registrations were removed
PROMPT Checking file registration cleanup...
DECLARE
vRemainingFiles NUMBER := 0;
BEGIN
SELECT COUNT(*)
INTO vRemainingFiles
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE PROCESS_NAME = 'MARS-1005'
AND RECEPTION_DATE >= SYSDATE - 7; -- Last week
IF vRemainingFiles = 0 THEN
DBMS_OUTPUT.PUT_LINE('SUCCESS: All MARS-1005 file registrations removed');
ELSE
DBMS_OUTPUT.PUT_LINE('WARNING: ' || vRemainingFiles || ' file registrations still exist');
-- Show remaining files
FOR rec IN (
SELECT SUBSTR(SOURCE_FILE_NAME, 1, 50) AS FILE_NAME,
TO_CHAR(RECEPTION_DATE, 'YYYY-MM-DD HH24:MI:SS') AS RECEIVED_TIME
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE PROCESS_NAME = 'MARS-1005'
AND RECEPTION_DATE >= SYSDATE - 7
) LOOP
DBMS_OUTPUT.PUT_LINE(' Remaining: ' || rec.FILE_NAME);
END LOOP;
END IF;
END;
/
-- Check 2: Verify process logs were cleaned
PROMPT Checking process log cleanup...
DECLARE
vRemainingLogs NUMBER := 0;
BEGIN
SELECT COUNT(*)
INTO vRemainingLogs
FROM CT_MRDS.A_PROCESS_LOG
WHERE PROCESS_NAME = 'MARS-1005'
AND LOG_TIMESTAMP >= SYSDATE - 7; -- Last week
IF vRemainingLogs = 0 THEN
DBMS_OUTPUT.PUT_LINE('SUCCESS: All MARS-1005 process logs removed');
ELSE
DBMS_OUTPUT.PUT_LINE('WARNING: ' || vRemainingLogs || ' process log entries still exist');
-- Show remaining logs (first few)
FOR rec IN (
SELECT TO_CHAR(LOG_TIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') AS LOG_TIME,
PROCEDURE_NAME,
SUBSTR(LOG_MESSAGE, 1, 40) AS MESSAGE
FROM CT_MRDS.A_PROCESS_LOG
WHERE PROCESS_NAME = 'MARS-1005'
AND LOG_TIMESTAMP >= SYSDATE - 7
ORDER BY LOG_TIMESTAMP DESC
FETCH FIRST 3 ROWS ONLY
) LOOP
DBMS_OUTPUT.PUT_LINE(' Remaining: ' || rec.LOG_TIME || ' ' || rec.PROCEDURE_NAME);
END LOOP;
END IF;
END;
/
-- Check 3: Verify cloud bucket cleanup (informational only)
PROMPT Checking cloud bucket status...
DECLARE
vCloudFileCount NUMBER := 0;
vCredentialName VARCHAR2(100);
vDataBucketUri VARCHAR2(500);
BEGIN
-- Get bucket URI and credential
vDataBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA');
vCredentialName := CT_MRDS.ENV_MANAGER.gvCredentialName;
DBMS_OUTPUT.PUT_LINE('Checking DATA bucket (ODS area): ' || vDataBucketUri);
-- Count remaining files in cloud bucket
BEGIN
FOR rec IN (
SELECT object_name
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(
credential_name => vCredentialName,
location_uri => vDataBucketUri
))
WHERE object_name LIKE 'ODS/TOP/%'
) LOOP
vCloudFileCount := vCloudFileCount + 1;
IF vCloudFileCount <= 3 THEN -- Show first 3 files
DBMS_OUTPUT.PUT_LINE(' Cloud file: ' || rec.object_name);
END IF;
END LOOP;
IF vCloudFileCount = 0 THEN
DBMS_OUTPUT.PUT_LINE('SUCCESS: No TOP files found in cloud bucket - clean');
ELSE
DBMS_OUTPUT.PUT_LINE('INFO: ' || vCloudFileCount || ' TOP file(s) still in cloud bucket');
DBMS_OUTPUT.PUT_LINE(' Note: Cloud files are not automatically deleted by rollback');
DBMS_OUTPUT.PUT_LINE(' Run 90_MARS_1005_rollback_delete_csv_files.sql to remove them');
END IF;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('WARNING: Cannot check cloud bucket: ' || SQLERRM);
END;
END;
/
-- Check 4: Verify rollback logs were created
PROMPT Checking rollback operation logs...
DECLARE
vRollbackLogs NUMBER := 0;
BEGIN
SELECT COUNT(*)
INTO vRollbackLogs
FROM CT_MRDS.A_PROCESS_LOG
WHERE PROCESS_NAME = 'MARS-1005-ROLLBACK'
AND LOG_TIMESTAMP >= SYSDATE - 1/24; -- Last hour
IF vRollbackLogs > 0 THEN
DBMS_OUTPUT.PUT_LINE('SUCCESS: Rollback operation logs found: ' || vRollbackLogs);
-- Show recent rollback logs
FOR rec IN (
SELECT TO_CHAR(LOG_TIMESTAMP, 'HH24:MI:SS') AS LOG_TIME,
PROCEDURE_NAME,
LOG_LEVEL,
SUBSTR(LOG_MESSAGE, 1, 50) AS MESSAGE
FROM CT_MRDS.A_PROCESS_LOG
WHERE PROCESS_NAME = 'MARS-1005-ROLLBACK'
AND LOG_TIMESTAMP >= SYSDATE - 1/24
ORDER BY LOG_TIMESTAMP DESC
) LOOP
DBMS_OUTPUT.PUT_LINE(' ' || rec.LOG_TIME || ' [' || rec.LOG_LEVEL || '] ' ||
rec.PROCEDURE_NAME || ': ' || rec.MESSAGE);
END LOOP;
ELSE
DBMS_OUTPUT.PUT_LINE('WARNING: No rollback operation logs found');
END IF;
END;
/
PROMPT
PROMPT =========================================================================
PROMPT MARS-1005 Rollback Verification Summary
PROMPT =========================================================================
DECLARE
vRemainingFiles NUMBER := 0;
vRemainingLogs NUMBER := 0;
vRollbackStatus VARCHAR2(20);
BEGIN
-- Count remaining registrations
SELECT COUNT(*)
INTO vRemainingFiles
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE PROCESS_NAME = 'MARS-1005'
AND RECEPTION_DATE >= SYSDATE - 7;
-- Count remaining process logs
SELECT COUNT(*)
INTO vRemainingLogs
FROM CT_MRDS.A_PROCESS_LOG
WHERE PROCESS_NAME = 'MARS-1005'
AND LOG_TIMESTAMP >= SYSDATE - 7;
-- Determine rollback status
IF vRemainingFiles = 0 AND vRemainingLogs = 0 THEN
vRollbackStatus := 'COMPLETE';
ELSIF vRemainingFiles = 0 OR vRemainingLogs = 0 THEN
vRollbackStatus := 'PARTIAL';
ELSE
vRollbackStatus := 'INCOMPLETE';
END IF;
DBMS_OUTPUT.PUT_LINE('MARS-1005 Rollback Status: ' || vRollbackStatus);
DBMS_OUTPUT.PUT_LINE('- Remaining file registrations: ' || vRemainingFiles);
DBMS_OUTPUT.PUT_LINE('- Remaining process logs: ' || vRemainingLogs);
IF vRollbackStatus = 'COMPLETE' THEN
DBMS_OUTPUT.PUT_LINE('SUCCESS: Rollback completed successfully - system clean');
ELSE
DBMS_OUTPUT.PUT_LINE('WARNING: Rollback incomplete - manual cleanup may be required');
END IF;
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('Note: Cloud bucket files (OCI) are not automatically removed');
DBMS_OUTPUT.PUT_LINE(' Use OCI console or DBMS_CLOUD commands for file deletion if needed');
END;
/
PROMPT =========================================================================
PROMPT Rollback Verification Completed
PROMPT =========================================================================

@@ -0,0 +1,91 @@
-- ===================================================================
-- MARS-1005 INSTALL SCRIPT: OU_TOP Historical Data Export to ODS Bucket
-- ===================================================================
-- Purpose: One-time bulk export of 6 OU_TOP LEGACY tables to OCI DATA bucket
-- (ODS bucket area, CSV format)
-- Uses DATA_EXPORTER EXPORT_TABLE_DATA with pRegisterExport for file tracking
-- Author: Grzegorz Michalski
-- Date: 2026-03-06
-- Dynamic spool file generation (using SYS_CONTEXT - no DBA privileges required)
-- Log files are automatically created in log/ subdirectory
-- IMPORTANT: Ensure log/ directory exists before SPOOL (use host mkdir)
host mkdir log 2>nul
var filename VARCHAR2(100)
BEGIN
:filename := 'log/INSTALL_MARS_1005_' || SYS_CONTEXT('USERENV', 'CON_NAME') || '_' || TO_CHAR(SYSDATE,'YYYYMMDD_HH24MISS') || '.log';
END;
/
column filename new_value _filename
select :filename filename from dual;
spool &_filename
SET ECHO OFF
SET TIMING ON
SET SERVEROUTPUT ON SIZE UNLIMITED
SET PAUSE OFF
-- Set current schema context (optional - use when modifying packages in specific schema)
-- ALTER SESSION SET CURRENT_SCHEMA = CT_MRDS;
PROMPT =========================================================================
PROMPT MARS-1005: OU_TOP Historical Data Export to ODS Bucket (One-Time Migration)
PROMPT =========================================================================
PROMPT
PROMPT This script will export 6 OU_TOP LEGACY tables to OCI DATA bucket:
PROMPT
PROMPT TARGET: DATA Bucket / ODS area (CSV format):
PROMPT - OU_TOP.LEGACY_ALLOTMENT -> ODS/TOP/TOP_ALLOTMENT
PROMPT - OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_HEADER -> ODS/TOP/TOP_ALLOTMENT_MODIFICATION_HEADER
PROMPT - OU_TOP.LEGACY_ALLOTMENT_MODIFICATION_ITEM -> ODS/TOP/TOP_ALLOTMENT_MODIFICATION_ITEM
PROMPT - OU_TOP.LEGACY_ANNOUNCEMENT -> ODS/TOP/TOP_ANNOUNCEMENT
PROMPT - OU_TOP.LEGACY_FBL_ITEM -> ODS/TOP/TOP_FULLBIDLIST_ITEM
PROMPT - OU_TOP.LEGACY_FULLBID_ARRAY_COMPILED -> ODS/TOP/TOP_FULLBID_ARRAY_COMPILED
PROMPT
PROMPT Key Features:
PROMPT - Files registered in A_SOURCE_FILE_RECEIVED with PROCESS_NAME = 'MARS-1005'
PROMPT - Template table column order matching (CT_ET_TEMPLATES.TOP_*)
PROMPT - ODS/TOP bucket path structure
PROMPT =========================================================================
-- Confirm installation with user
ACCEPT continue CHAR PROMPT 'Type YES to continue with installation, or Ctrl+C to abort: '
WHENEVER SQLERROR EXIT SQL.SQLCODE
BEGIN
IF '&continue' IS NULL OR TRIM('&continue') IS NULL OR UPPER(TRIM('&continue')) != 'YES' THEN
RAISE_APPLICATION_ERROR(-20001, 'Installation aborted by user');
END IF;
END;
/
WHENEVER SQLERROR CONTINUE
PROMPT
PROMPT =========================================================================
PROMPT Step 1: Export OU_TOP Data to ODS Bucket
PROMPT =========================================================================
@@01_MARS_1005_export_top_data.sql
PROMPT
PROMPT =========================================================================
PROMPT Step 2: Verify Exports (File Registration Check)
PROMPT =========================================================================
@@02_MARS_1005_verify_exports.sql
PROMPT
PROMPT =========================================================================
PROMPT Step 3: Verify Data Integrity (Source vs Exported)
PROMPT =========================================================================
@@03_MARS_1005_verify_data_integrity.sql
PROMPT
PROMPT =========================================================================
PROMPT MARS-1005 Installation - COMPLETED
PROMPT =========================================================================
PROMPT Check the log file for complete installation details.
PROMPT For rollback, use: rollback_mars1005.sql
PROMPT =========================================================================
spool off
quit;

@@ -0,0 +1,81 @@
-- ===================================================================
-- MARS-1005 ROLLBACK SCRIPT: OU_TOP Historical Data Export Rollback
-- ===================================================================
-- Purpose: Rollback MARS-1005 - Delete exported CSV files and file registrations
-- WARNING: This will DELETE all exported data files and registrations!
-- Author: Grzegorz Michalski
-- Date: 2026-02-12
-- Dynamic spool file generation (using SYS_CONTEXT - no DBA privileges required)
-- IMPORTANT: Ensure log/ directory exists before SPOOL (use host mkdir)
host mkdir log 2>nul
var filename VARCHAR2(100)
BEGIN
:filename := 'log/ROLLBACK_MARS_1005_' || SYS_CONTEXT('USERENV', 'CON_NAME') || '_' || TO_CHAR(SYSDATE,'YYYYMMDD_HH24MISS') || '.log';
END;
/
column filename new_value _filename
select :filename filename from dual;
spool &_filename
SET ECHO OFF
SET TIMING ON
SET SERVEROUTPUT ON SIZE UNLIMITED
SET PAUSE OFF
PROMPT =========================================================================
PROMPT MARS-1005: Rollback OU_TOP Historical Data Export
PROMPT =========================================================================
PROMPT WARNING: This will DELETE exported CSV files and file registrations!
PROMPT - DATA bucket (ODS area): mrds_data_dev/ODS/TOP/
PROMPT - File registrations: A_SOURCE_FILE_RECEIVED entries
PROMPT
PROMPT Only proceed if export failed and needs to be restarted!
PROMPT =========================================================================
-- Confirm rollback with user
ACCEPT continue CHAR PROMPT 'Type YES to continue with rollback, or Ctrl+C to abort: '
WHENEVER SQLERROR EXIT SQL.SQLCODE
BEGIN
IF '&continue' IS NULL OR TRIM('&continue') IS NULL OR UPPER(TRIM('&continue')) != 'YES' THEN
RAISE_APPLICATION_ERROR(-20001, 'Rollback aborted by user');
END IF;
END;
/
WHENEVER SQLERROR CONTINUE
PROMPT
PROMPT =========================================================================
PROMPT Step 1: Delete Exported CSV Files from DATA Bucket
PROMPT =========================================================================
@@90_MARS_1005_rollback_delete_csv_files.sql
PROMPT
PROMPT =========================================================================
PROMPT Step 2: Delete File Registrations
PROMPT =========================================================================
@@91_MARS_1005_rollback_file_registrations.sql
PROMPT
PROMPT =========================================================================
PROMPT Step 3: Clean Process Logs
PROMPT =========================================================================
@@92_MARS_1005_rollback_process_logs.sql
PROMPT
PROMPT =========================================================================
PROMPT Step 4: Verify Rollback Completion
PROMPT =========================================================================
@@99_MARS_1005_verify_rollback.sql
PROMPT
PROMPT =========================================================================
PROMPT MARS-1005 Rollback - COMPLETED
PROMPT =========================================================================
PROMPT Check the log file for complete rollback details.
PROMPT =========================================================================
spool off
quit;

@@ -0,0 +1,17 @@
--------------------------------------------------------
-- DDL for Table C2D_A_UC_DISSEM_METADATA_LOADS
--------------------------------------------------------
CREATE TABLE "CT_ET_TEMPLATES"."C2D_A_UC_DISSEM_METADATA_LOADS"
( "A_KEY" NUMBER(38,0),
"A_WORKFLOW_HISTORY_KEY" NUMBER(38,0),
"C2D_VERSION" VARCHAR2(3 CHAR) COLLATE "USING_NLS_COMP",
"FILE_CREATION_DATE" DATE,
"NO_OF_SUSPECT_RECORDS" NUMBER(10,0),
"REPORTING_NCB" VARCHAR2(2 CHAR) COLLATE "USING_NLS_COMP",
"SNAPSHOT_DATE" DATE,
"PROCESSED_TO_DWH" CHAR(1 CHAR) COLLATE "USING_NLS_COMP"
) DEFAULT COLLATION "USING_NLS_COMP" SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255
COLUMN STORE COMPRESS FOR QUERY HIGH ROW LEVEL LOCKING LOGGING
TABLESPACE "DATA" ;

@@ -0,0 +1,6 @@
--------------------------------------------------------
-- Constraints for Table C2D_A_UC_DISSEM_METADATA_LOADS
--------------------------------------------------------
ALTER TABLE "CT_ET_TEMPLATES"."C2D_A_UC_DISSEM_METADATA_LOADS" MODIFY ("A_KEY" NOT NULL ENABLE);
ALTER TABLE "CT_ET_TEMPLATES"."C2D_A_UC_DISSEM_METADATA_LOADS" MODIFY ("A_WORKFLOW_HISTORY_KEY" NOT NULL ENABLE);

@@ -0,0 +1,52 @@
--------------------------------------------------------
-- DDL for Table C2D_ELA_INFO_REPLICATION
--------------------------------------------------------
CREATE TABLE "CT_ET_TEMPLATES"."C2D_ELA_INFO_REPLICATION"
( "A_KEY" NUMBER(38,0),
"A_WORKFLOW_HISTORY_KEY" NUMBER(38,0),
"VERSION" VARCHAR2(5 BYTE) COLLATE "USING_NLS_COMP",
"ID" NUMBER(28,0),
"RIAD_CODE" VARCHAR2(30 BYTE) COLLATE "USING_NLS_COMP",
"INSTITUTION_NAME" VARCHAR2(200 BYTE) COLLATE "USING_NLS_COMP",
"ELA_MATURITY_DATE" DATE,
"ELA_VALUE_DATE" DATE,
"ELA_BASE" NUMBER(28,10),
"ELA_DENOMINATION" VARCHAR2(3 BYTE) COLLATE "USING_NLS_COMP",
"ELA" NUMBER(28,10),
"INTEREST_RATE_APPLIED" NUMBER(28,10),
"ISIN_CODE" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP",
"NOMINAL_AMOUNT_SUBMITTED" NUMBER(28,10),
"COLLATERAL_VALUE_BEFORE_HAIRCU" NUMBER(28,10),
"COLLATERAL_VALUE_AFTER_HAIRCUT" NUMBER(28,10),
"HAIRCUT" NUMBER(28,10),
"ELA_ASSET_GROUP" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP",
"DENOMINATION" VARCHAR2(3 BYTE) COLLATE "USING_NLS_COMP",
"ASSET_TYPE" VARCHAR2(4 BYTE) COLLATE "USING_NLS_COMP",
"DOMESTIC_OR_XBORDER" VARCHAR2(20 BYTE) COLLATE "USING_NLS_COMP",
"ABS_TYPE" VARCHAR2(40 BYTE) COLLATE "USING_NLS_COMP",
"NUMBER_OF_AGGREG_ASSETS" NUMBER(28,0),
"NUMBER_OF_AGGREG_DEBTORS" NUMBER(28,0),
"GUARANTEE" VARCHAR2(200 BYTE) COLLATE "USING_NLS_COMP",
"ISSUER_CODE" VARCHAR2(30 BYTE) COLLATE "USING_NLS_COMP",
"ISSUER_NAME" VARCHAR2(200 BYTE) COLLATE "USING_NLS_COMP",
"ISSUER_RESIDENCE" VARCHAR2(3 BYTE) COLLATE "USING_NLS_COMP",
"ISSUER_GROUP" VARCHAR2(4 BYTE) COLLATE "USING_NLS_COMP",
"RATING_OF_ASSET" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP",
"RATING_OF_THE_IS" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP",
"RATING_OF_THE_GU" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP",
"PRICE_INFORMATION" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP",
"VALUATION_METHODOLOGY" VARCHAR2(15 BYTE) COLLATE "USING_NLS_COMP",
"TYPE_OF_OPERATION" VARCHAR2(30 BYTE) COLLATE "USING_NLS_COMP",
"NCB_COMMENT" VARCHAR2(200 BYTE) COLLATE "USING_NLS_COMP",
"REPORTING_NCB" VARCHAR2(2 CHAR) COLLATE "USING_NLS_COMP",
"SNAPSHOT_DATE" DATE,
"IS_CORRECTION" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"META_INFORMATION_ID" NUMBER(19,0),
"META_INFORMATION_TYPE" VARCHAR2(50 CHAR) COLLATE "USING_NLS_COMP",
"USED_SNAPSHOT_DATE" DATE,
"PRICING_DATE" DATE
) DEFAULT COLLATION "USING_NLS_COMP" SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255
COLUMN STORE COMPRESS FOR QUERY HIGH ROW LEVEL LOCKING LOGGING
TABLESPACE "DATA" ;

@@ -0,0 +1,6 @@
--------------------------------------------------------
-- Constraints for Table C2D_ELA_INFO_REPLICATION
--------------------------------------------------------
ALTER TABLE "CT_ET_TEMPLATES"."C2D_ELA_INFO_REPLICATION" MODIFY ("A_KEY" NOT NULL ENABLE);
ALTER TABLE "CT_ET_TEMPLATES"."C2D_ELA_INFO_REPLICATION" MODIFY ("A_WORKFLOW_HISTORY_KEY" NOT NULL ENABLE);

@@ -0,0 +1,19 @@
--------------------------------------------------------
-- DDL for Table C2D_MPEC_ADMIN
--------------------------------------------------------
CREATE TABLE "CT_ET_TEMPLATES"."C2D_MPEC_ADMIN"
( "A_KEY" NUMBER(38,0),
"A_WORKFLOW_HISTORY_KEY" NUMBER(38,0),
"SENDER_ISO_CODE" VARCHAR2(5 CHAR) COLLATE "USING_NLS_COMP",
"SENDER_BUSINESS_AREA" VARCHAR2(200 CHAR) COLLATE "USING_NLS_COMP",
"RECEIVER_ISO_CODE" VARCHAR2(5 CHAR) COLLATE "USING_NLS_COMP",
"RECEIVER_BUSINESS_AREA" VARCHAR2(200 CHAR) COLLATE "USING_NLS_COMP",
"DATASET_ID" VARCHAR2(5 CHAR) COLLATE "USING_NLS_COMP",
"CREATION_TIME" DATE,
"IREF" NUMBER(19,0),
"SUBJECT" VARCHAR2(2000 CHAR) COLLATE "USING_NLS_COMP"
) DEFAULT COLLATION "USING_NLS_COMP" SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255
COLUMN STORE COMPRESS FOR QUERY HIGH ROW LEVEL LOCKING LOGGING
TABLESPACE "DATA" ;

@@ -0,0 +1,6 @@
--------------------------------------------------------
-- Constraints for Table C2D_MPEC_ADMIN
--------------------------------------------------------
ALTER TABLE "CT_ET_TEMPLATES"."C2D_MPEC_ADMIN" MODIFY ("A_KEY" NOT NULL ENABLE);
ALTER TABLE "CT_ET_TEMPLATES"."C2D_MPEC_ADMIN" MODIFY ("A_WORKFLOW_HISTORY_KEY" NOT NULL ENABLE);

@@ -0,0 +1,96 @@
--------------------------------------------------------
-- DDL for Table C2D_MPEC_CONTENT
--------------------------------------------------------
CREATE TABLE "CT_ET_TEMPLATES"."C2D_MPEC_CONTENT"
( "A_KEY" NUMBER(38,0),
"A_WORKFLOW_HISTORY_KEY" NUMBER(38,0),
"HOST" VARCHAR2(5 CHAR) COLLATE "USING_NLS_COMP",
"ID" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"MPEC_BIC" VARCHAR2(11 CHAR) COLLATE "USING_NLS_COMP",
"RTGS_ACCESS" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"INTRADAY_CREDIT_FACILITY" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"MRR_TYPE" VARCHAR2(30 CHAR) COLLATE "USING_NLS_COMP",
"MRR_INTERMEDIARY_HOST" VARCHAR2(5 CHAR) COLLATE "USING_NLS_COMP",
"MRR_INTERMEDIARY_ID" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"MRR_AVG_PROV_SUSP_STDT" DATE,
"MRR_AVG_PROV_SUSP_ENDT" DATE,
"MRR_EXEMPTION_STDT" DATE,
"MRR_EXEMPTION_ENDT" DATE,
"MRR_EXEMPTION_REORG_STDT" DATE,
"MRR_EXEMPTION_REORG_ENDT" DATE,
"PRUDENTIAL_SUPERVISION" VARCHAR2(100 CHAR) COLLATE "USING_NLS_COMP",
"ELIG_DEPOSIT_FACILITY" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"DEPOSIT_FACILITY_SUSP_STDT" DATE,
"DEPOSIT_FACILITY_SUSP_ENDT" DATE,
"DEPOSIT_FACILITY_EXCL_STDT" DATE,
"DEPOSIT_FACILITY_EXCL_ENDT" DATE,
"DEPOSIT_FACILITY_LIMIT_STDT" DATE,
"DEPOSIT_FACILITY_LIMIT_ENDT" DATE,
"ELIG_MARGINAL_LENDING_FACILITY" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"MARG_LEND_FACILITY_SUSP_STDT" DATE,
"MARG_LEND_FACILITY_SUSP_ENDT" DATE,
"MARG_LEND_FACILITY_EXCL_STDT" DATE,
"MARG_LEND_FACILITY_EXCL_ENDT" DATE,
"MARG_LEND_FACILITY_LIMIT_STDT" DATE,
"MARG_LEND_FACILITY_LIMIT_ENDT" DATE,
"ELIG_ECB_DEBT_CERTIFICATE" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"ECB_DEBT_CERTIF_SUSP_STDT" DATE,
"ECB_DEBT_CERTIF_SUSP_ENDT" DATE,
"ECB_DEBT_CERTIF_EXCL_STDT" DATE,
"ECB_DEBT_CERTIF_EXCL_ENDT" DATE,
"ECB_DEBT_CERTIF_LIMIT_STDT" DATE,
"ECB_DEBT_CERTIF_LIMIT_ENDT" DATE,
"ELIG_STD_TENDER_OPERATIONS" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"STD_TENDER_OPE_SUSP_STDT" DATE,
"STD_TENDER_OPE_SUSP_ENDT" DATE,
"STD_TENDER_OPE_EXCL_STDT" DATE,
"STD_TENDER_OPE_EXCL_ENDT" DATE,
"STD_TENDER_OPE_LIMIT_STDT" DATE,
"STD_TENDER_OPE_LIMIT_ENDT" DATE,
"ELIG_FTRO_ABSORBING" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"FINE_TUN_REVOPE_ABS_SUSP_STDT" DATE,
"FINE_TUN_REVOPE_ABS_SUSP_ENDT" DATE,
"FINE_TUN_REVOPE_ABS_EXCL_STDT" DATE,
"FINE_TUN_REVOPE_ABS_EXCL_ENDT" DATE,
"FINE_TUN_REVOPE_ABS_LIMIT_STDT" DATE,
"FINE_TUN_REVOPE_ABS_LIMIT_ENDT" DATE,
"ELIG_FTRO_PROVIDING" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"FINE_TUN_REVOP_PROV_SUSP_STDT" DATE,
"FINE_TUN_REVOP_PROV_SUSP_ENDT" DATE,
"FINE_TUN_REVOP_PROV_EXCL_STDT" DATE,
"FINE_TUN_REVOP_PROV_EXCL_ENDT" DATE,
"FINE_TUN_REVOP_PROV_LIMIT_STDT" DATE,
"FINE_TUN_REVOP_PROV_LIMIT_ENDT" DATE,
"ELIG_FIX_TERM_DEPOSIT" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"FIX_TERM_DEPOSIT_SUSP_STDT" DATE,
"FIX_TERM_DEPOSIT_SUSP_ENDT" DATE,
"FIX_TERM_DEPOSIT_EXCL_STDT" DATE,
"FIX_TERM_DEPOSIT_EXCL_ENDT" DATE,
"FIX_TERM_DEPOSIT_LIMIT_STDT" DATE,
"FIX_TERM_DEPOSIT_LIMIT_ENDT" DATE,
"ELIG_FX_SWAP_ABSORBING" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"FX_SWAP_ABS_SUSP_STDT" DATE,
"FX_SWAP_ABS_SUSP_ENDT" DATE,
"FX_SWAP_ABS_EXCL_STDT" DATE,
"FX_SWAP_ABS_EXCL_ENDT" DATE,
"FX_SWAP_ABS_LIMIT_STDT" DATE,
"FX_SWAP_ABS_LIMIT_ENDT" DATE,
"ELIG_FX_SWAP_PROVIDING" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"FX_SWAP_PROV_SUSP_STDT" DATE,
"FX_SWAP_PROV_SUSP_ENDT" DATE,
"FX_SWAP_PROV_EXCL_STDT" DATE,
"FX_SWAP_PROV_EXCL_ENDT" DATE,
"FX_SWAP_PROV_LIMIT_STDT" DATE,
"FX_SWAP_PROV_LIMIT_ENDT" DATE,
"ECB_ENTRY_DATE" DATE,
"STATUS" VARCHAR2(10 CHAR) COLLATE "USING_NLS_COMP",
"ACTION" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"USD_OPERATIONS" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"DELETION_REASON" VARCHAR2(30 BYTE) COLLATE "USING_NLS_COMP",
"NCB_COMMENT" VARCHAR2(255 BYTE) COLLATE "USING_NLS_COMP",
"CLM_ACCESS" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP"
) DEFAULT COLLATION "USING_NLS_COMP" SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255
COLUMN STORE COMPRESS FOR QUERY HIGH ROW LEVEL LOCKING LOGGING
TABLESPACE "DATA" ;

@@ -0,0 +1,6 @@
--------------------------------------------------------
-- Constraints for Table C2D_MPEC_CONTENT
--------------------------------------------------------
ALTER TABLE "CT_ET_TEMPLATES"."C2D_MPEC_CONTENT" MODIFY ("A_KEY" NOT NULL ENABLE);
ALTER TABLE "CT_ET_TEMPLATES"."C2D_MPEC_CONTENT" MODIFY ("A_WORKFLOW_HISTORY_KEY" NOT NULL ENABLE);

@@ -0,0 +1,13 @@
--------------------------------------------------------
-- DDL for Table C2D_MPEC_CONTENT_CRITERION
--------------------------------------------------------
CREATE TABLE "CT_ET_TEMPLATES"."C2D_MPEC_CONTENT_CRITERION"
( "A_KEY" NUMBER(38,0),
"A_WORKFLOW_HISTORY_KEY" NUMBER(38,0),
"A_MPEC_CONTENT_FK" NUMBER(38,0),
"CRITERION" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP"
) DEFAULT COLLATION "USING_NLS_COMP" SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255
COLUMN STORE COMPRESS FOR QUERY HIGH ROW LEVEL LOCKING LOGGING
TABLESPACE "DATA" ;

@@ -0,0 +1,6 @@
--------------------------------------------------------
-- Constraints for Table C2D_MPEC_CONTENT_CRITERION
--------------------------------------------------------
ALTER TABLE "CT_ET_TEMPLATES"."C2D_MPEC_CONTENT_CRITERION" MODIFY ("A_KEY" NOT NULL ENABLE);
ALTER TABLE "CT_ET_TEMPLATES"."C2D_MPEC_CONTENT_CRITERION" MODIFY ("A_WORKFLOW_HISTORY_KEY" NOT NULL ENABLE);

@@ -0,0 +1,13 @@
--------------------------------------------------------
-- DDL for Table C2D_MPEC_CONTENT_CRITERION_FULL
--------------------------------------------------------
CREATE TABLE "CT_ET_TEMPLATES"."C2D_MPEC_CONTENT_CRITERION_FULL"
( "A_KEY" NUMBER(38,0),
"A_WORKFLOW_HISTORY_KEY" NUMBER(38,0),
"A_MPEC_CONTENT_FK" NUMBER(38,0),
"CRITERION" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP"
) DEFAULT COLLATION "USING_NLS_COMP" SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255
COLUMN STORE COMPRESS FOR QUERY HIGH ROW LEVEL LOCKING LOGGING
TABLESPACE "DATA" ;

@@ -0,0 +1,6 @@
--------------------------------------------------------
-- Constraints for Table C2D_MPEC_CONTENT_CRITERION_FULL
--------------------------------------------------------
ALTER TABLE "CT_ET_TEMPLATES"."C2D_MPEC_CONTENT_CRITERION_FULL" MODIFY ("A_KEY" NOT NULL ENABLE);
ALTER TABLE "CT_ET_TEMPLATES"."C2D_MPEC_CONTENT_CRITERION_FULL" MODIFY ("A_WORKFLOW_HISTORY_KEY" NOT NULL ENABLE);

@@ -0,0 +1,96 @@
--------------------------------------------------------
-- DDL for Table C2D_MPEC_CONTENT_FULL
--------------------------------------------------------
CREATE TABLE "CT_ET_TEMPLATES"."C2D_MPEC_CONTENT_FULL"
( "A_KEY" NUMBER(38,0),
"A_WORKFLOW_HISTORY_KEY" NUMBER(38,0),
"HOST" VARCHAR2(5 CHAR) COLLATE "USING_NLS_COMP",
"ID" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"MPEC_BIC" VARCHAR2(11 CHAR) COLLATE "USING_NLS_COMP",
"RTGS_ACCESS" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"INTRADAY_CREDIT_FACILITY" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"MRR_TYPE" VARCHAR2(30 CHAR) COLLATE "USING_NLS_COMP",
"MRR_INTERMEDIARY_HOST" VARCHAR2(5 CHAR) COLLATE "USING_NLS_COMP",
"MRR_INTERMEDIARY_ID" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"MRR_AVG_PROV_SUSP_STDT" DATE,
"MRR_AVG_PROV_SUSP_ENDT" DATE,
"MRR_EXEMPTION_STDT" DATE,
"MRR_EXEMPTION_ENDT" DATE,
"MRR_EXEMPTION_REORG_STDT" DATE,
"MRR_EXEMPTION_REORG_ENDT" DATE,
"PRUDENTIAL_SUPERVISION" VARCHAR2(100 CHAR) COLLATE "USING_NLS_COMP",
"ELIG_DEPOSIT_FACILITY" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"DEPOSIT_FACILITY_SUSP_STDT" DATE,
"DEPOSIT_FACILITY_SUSP_ENDT" DATE,
"DEPOSIT_FACILITY_EXCL_STDT" DATE,
"DEPOSIT_FACILITY_EXCL_ENDT" DATE,
"DEPOSIT_FACILITY_LIMIT_STDT" DATE,
"DEPOSIT_FACILITY_LIMIT_ENDT" DATE,
"ELIG_MARGINAL_LENDING_FACILITY" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"MARG_LEND_FACILITY_SUSP_STDT" DATE,
"MARG_LEND_FACILITY_SUSP_ENDT" DATE,
"MARG_LEND_FACILITY_EXCL_STDT" DATE,
"MARG_LEND_FACILITY_EXCL_ENDT" DATE,
"MARG_LEND_FACILITY_LIMIT_STDT" DATE,
"MARG_LEND_FACILITY_LIMIT_ENDT" DATE,
"ELIG_ECB_DEBT_CERTIFICATE" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"ECB_DEBT_CERTIF_SUSP_STDT" DATE,
"ECB_DEBT_CERTIF_SUSP_ENDT" DATE,
"ECB_DEBT_CERTIF_EXCL_STDT" DATE,
"ECB_DEBT_CERTIF_EXCL_ENDT" DATE,
"ECB_DEBT_CERTIF_LIMIT_STDT" DATE,
"ECB_DEBT_CERTIF_LIMIT_ENDT" DATE,
"ELIG_STD_TENDER_OPERATIONS" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"STD_TENDER_OPE_SUSP_STDT" DATE,
"STD_TENDER_OPE_SUSP_ENDT" DATE,
"STD_TENDER_OPE_EXCL_STDT" DATE,
"STD_TENDER_OPE_EXCL_ENDT" DATE,
"STD_TENDER_OPE_LIMIT_STDT" DATE,
"STD_TENDER_OPE_LIMIT_ENDT" DATE,
"ELIG_FTRO_ABSORBING" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"FINE_TUN_REVOPE_ABS_SUSP_STDT" DATE,
"FINE_TUN_REVOPE_ABS_SUSP_ENDT" DATE,
"FINE_TUN_REVOPE_ABS_EXCL_STDT" DATE,
"FINE_TUN_REVOPE_ABS_EXCL_ENDT" DATE,
"FINE_TUN_REVOPE_ABS_LIMIT_STDT" DATE,
"FINE_TUN_REVOPE_ABS_LIMIT_ENDT" DATE,
"ELIG_FTRO_PROVIDING" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"FINE_TUN_REVOP_PROV_SUSP_STDT" DATE,
"FINE_TUN_REVOP_PROV_SUSP_ENDT" DATE,
"FINE_TUN_REVOP_PROV_EXCL_STDT" DATE,
"FINE_TUN_REVOP_PROV_EXCL_ENDT" DATE,
"FINE_TUN_REVOP_PROV_LIMIT_STDT" DATE,
"FINE_TUN_REVOP_PROV_LIMIT_ENDT" DATE,
"ELIG_FIX_TERM_DEPOSIT" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"FIX_TERM_DEPOSIT_SUSP_STDT" DATE,
"FIX_TERM_DEPOSIT_SUSP_ENDT" DATE,
"FIX_TERM_DEPOSIT_EXCL_STDT" DATE,
"FIX_TERM_DEPOSIT_EXCL_ENDT" DATE,
"FIX_TERM_DEPOSIT_LIMIT_STDT" DATE,
"FIX_TERM_DEPOSIT_LIMIT_ENDT" DATE,
"ELIG_FX_SWAP_ABSORBING" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"FX_SWAP_ABS_SUSP_STDT" DATE,
"FX_SWAP_ABS_SUSP_ENDT" DATE,
"FX_SWAP_ABS_EXCL_STDT" DATE,
"FX_SWAP_ABS_EXCL_ENDT" DATE,
"FX_SWAP_ABS_LIMIT_STDT" DATE,
"FX_SWAP_ABS_LIMIT_ENDT" DATE,
"ELIG_FX_SWAP_PROVIDING" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"FX_SWAP_PROV_SUSP_STDT" DATE,
"FX_SWAP_PROV_SUSP_ENDT" DATE,
"FX_SWAP_PROV_EXCL_STDT" DATE,
"FX_SWAP_PROV_EXCL_ENDT" DATE,
"FX_SWAP_PROV_LIMIT_STDT" DATE,
"FX_SWAP_PROV_LIMIT_ENDT" DATE,
"ECB_ENTRY_DATE" DATE,
"STATUS" VARCHAR2(10 CHAR) COLLATE "USING_NLS_COMP",
"ACTION" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"USD_OPERATIONS" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"DELETION_REASON" VARCHAR2(30 CHAR) COLLATE "USING_NLS_COMP",
"NCB_COMMENT" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"CLM_ACCESS" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP"
) DEFAULT COLLATION "USING_NLS_COMP" SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255
COLUMN STORE COMPRESS FOR QUERY HIGH ROW LEVEL LOCKING LOGGING
TABLESPACE "DATA" ;


@@ -0,0 +1,6 @@
--------------------------------------------------------
-- Constraints for Table C2D_MPEC_CONTENT_FULL
--------------------------------------------------------
ALTER TABLE "CT_ET_TEMPLATES"."C2D_MPEC_CONTENT_FULL" MODIFY ("A_KEY" NOT NULL ENABLE);
ALTER TABLE "CT_ET_TEMPLATES"."C2D_MPEC_CONTENT_FULL" MODIFY ("A_WORKFLOW_HISTORY_KEY" NOT NULL ENABLE);


@@ -0,0 +1,25 @@
--------------------------------------------------------
-- DDL for Table C2D_MPEC_MID_FULL
--------------------------------------------------------
CREATE TABLE "CT_ET_TEMPLATES"."C2D_MPEC_MID_FULL"
( "A_KEY" NUMBER(38,0),
"A_WORKFLOW_HISTORY_KEY" NUMBER(38,0),
"RIAD_CODE" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"BIC" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"COUNTRY_OF_REGISTRATION" VARCHAR2(2 CHAR) COLLATE "USING_NLS_COMP",
"NAME" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"BOX" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"ADDRESS" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"POSTAL" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"CITY" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"CATEGORY" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"HEAD_COUNTRY_OF_REGISTRATION" VARCHAR2(2 CHAR) COLLATE "USING_NLS_COMP",
"HEAD_NAME" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"HEAD_RIAD_CODE" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"RESERVE" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"EXEMPT" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP"
) DEFAULT COLLATION "USING_NLS_COMP" SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255
COLUMN STORE COMPRESS FOR QUERY HIGH ROW LEVEL LOCKING LOGGING
TABLESPACE "DATA" ;


@@ -0,0 +1,6 @@
--------------------------------------------------------
-- Constraints for Table C2D_MPEC_MID_FULL
--------------------------------------------------------
ALTER TABLE "CT_ET_TEMPLATES"."C2D_MPEC_MID_FULL" MODIFY ("A_KEY" NOT NULL ENABLE);
ALTER TABLE "CT_ET_TEMPLATES"."C2D_MPEC_MID_FULL" MODIFY ("A_WORKFLOW_HISTORY_KEY" NOT NULL ENABLE);


@@ -0,0 +1,40 @@
--------------------------------------------------------
-- DDL for Table C2D_UC_MA_DISSEM
--------------------------------------------------------
CREATE TABLE "CT_ET_TEMPLATES"."C2D_UC_MA_DISSEM"
( "A_KEY" NUMBER(38,0),
"A_WORKFLOW_HISTORY_KEY" NUMBER(38,0),
"REPORTING_NCB" VARCHAR2(2 CHAR) COLLATE "USING_NLS_COMP",
"SNAPSHOT_DATE" DATE,
"FILE_CREATION_DATE" DATE,
"MFI_ID" VARCHAR2(256 CHAR) COLLATE "USING_NLS_COMP",
"ISIN_CODE" VARCHAR2(12 CHAR) COLLATE "USING_NLS_COMP",
"OTHER_REG_NO" VARCHAR2(100 CHAR) COLLATE "USING_NLS_COMP",
"NOM_AMT_SUBMITTED" NUMBER(14,8),
"COLL_BEFORE_HAIRCUTS" NUMBER(14,8),
"COLL_AFTER_HAIRCUTS" NUMBER(14,8),
"TYPE_OF_SYSTEM" VARCHAR2(4 CHAR) COLLATE "USING_NLS_COMP",
"TYPE_OF_OPERATION" VARCHAR2(5 CHAR) COLLATE "USING_NLS_COMP",
"DOM_OR_XBORDER" VARCHAR2(12 CHAR) COLLATE "USING_NLS_COMP",
"ISSUER_CAS" VARCHAR2(4 CHAR) COLLATE "USING_NLS_COMP",
"ISSUER_CRED_PROVIDER" VARCHAR2(100 CHAR) COLLATE "USING_NLS_COMP",
"ISSUER_CLASS" VARCHAR2(8 CHAR) COLLATE "USING_NLS_COMP",
"ISSUER_RATING_ENUM_VALUE" VARCHAR2(15 CHAR) COLLATE "USING_NLS_COMP",
"ISSUER_RATING_NUMBER_VALUE" NUMBER(9,8),
"NCB_COMMENT" VARCHAR2(200 CHAR) COLLATE "USING_NLS_COMP",
"MOBILISATION_CHANNEL" VARCHAR2(24 CHAR) COLLATE "USING_NLS_COMP",
"CCB" VARCHAR2(2 CHAR) COLLATE "USING_NLS_COMP",
"INVESTOR_SSS" VARCHAR2(6 CHAR) COLLATE "USING_NLS_COMP",
"INTERMEDIARY_SSS" VARCHAR2(6 CHAR) COLLATE "USING_NLS_COMP",
"ISSUER_SSS" VARCHAR2(6 CHAR) COLLATE "USING_NLS_COMP",
"TRIPARTY_AGENT" VARCHAR2(6 CHAR) COLLATE "USING_NLS_COMP",
"SUSPECT_ID" NUMBER(10,0),
"QUALITY_CHECK_STATUS" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"ERROR_CODE" VARCHAR2(30 CHAR) COLLATE "USING_NLS_COMP",
"ERROR_MESSAGE" VARCHAR2(500 CHAR) COLLATE "USING_NLS_COMP",
"ERROR_POSITION_IN_FILE" VARCHAR2(200 CHAR) COLLATE "USING_NLS_COMP"
) DEFAULT COLLATION "USING_NLS_COMP" SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255
COLUMN STORE COMPRESS FOR QUERY HIGH ROW LEVEL LOCKING LOGGING
TABLESPACE "DATA" ;


@@ -0,0 +1,6 @@
--------------------------------------------------------
-- Constraints for Table C2D_UC_MA_DISSEM
--------------------------------------------------------
ALTER TABLE "CT_ET_TEMPLATES"."C2D_UC_MA_DISSEM" MODIFY ("A_KEY" NOT NULL ENABLE);
ALTER TABLE "CT_ET_TEMPLATES"."C2D_UC_MA_DISSEM" MODIFY ("A_WORKFLOW_HISTORY_KEY" NOT NULL ENABLE);


@@ -0,0 +1,33 @@
--------------------------------------------------------
-- DDL for Table C2D_UC_NMA_DECC_DISSEM
--------------------------------------------------------
CREATE TABLE "CT_ET_TEMPLATES"."C2D_UC_NMA_DECC_DISSEM"
( "A_KEY" NUMBER(38,0),
"A_WORKFLOW_HISTORY_KEY" NUMBER(38,0),
"REPORTING_NCB" VARCHAR2(2 CHAR) COLLATE "USING_NLS_COMP",
"SNAPSHOT_DATE" DATE,
"FILE_CREATION_DATE" DATE,
"MFI_ID" VARCHAR2(256 CHAR) COLLATE "USING_NLS_COMP",
"ISIN_CODE" VARCHAR2(12 CHAR) COLLATE "USING_NLS_COMP",
"NOM_AMT_SUBMITTED" NUMBER(13,8),
"TYPE_OF_SYSTEM" VARCHAR2(4 CHAR) COLLATE "USING_NLS_COMP",
"TYPE_OF_OPERATION" VARCHAR2(5 CHAR) COLLATE "USING_NLS_COMP",
"DOM_OR_XBORDER" VARCHAR2(12 CHAR) COLLATE "USING_NLS_COMP",
"NON_MKT_ASSET_TYPE" VARCHAR2(20 CHAR) COLLATE "USING_NLS_COMP",
"NCB_COMMENT" VARCHAR2(200 CHAR) COLLATE "USING_NLS_COMP",
"MOBILISATION_CHANNEL" VARCHAR2(24 CHAR) COLLATE "USING_NLS_COMP",
"CCB" VARCHAR2(2 CHAR) COLLATE "USING_NLS_COMP",
"INVESTOR_SSS" VARCHAR2(6 CHAR) COLLATE "USING_NLS_COMP",
"INTERMEDIARY_SSS" VARCHAR2(6 CHAR) COLLATE "USING_NLS_COMP",
"ISSUER_SSS" VARCHAR2(6 CHAR) COLLATE "USING_NLS_COMP",
"TRIPARTY_AGENT" VARCHAR2(6 CHAR) COLLATE "USING_NLS_COMP",
"SUSPECT_ID" NUMBER(10,0),
"QUALITY_CHECK_STATUS" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"ERROR_CODE" VARCHAR2(30 CHAR) COLLATE "USING_NLS_COMP",
"ERROR_MESSAGE" VARCHAR2(500 CHAR) COLLATE "USING_NLS_COMP",
"ERROR_POSITION_IN_FILE" VARCHAR2(200 CHAR) COLLATE "USING_NLS_COMP"
) DEFAULT COLLATION "USING_NLS_COMP" SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255
COLUMN STORE COMPRESS FOR QUERY HIGH ROW LEVEL LOCKING LOGGING
TABLESPACE "DATA" ;


@@ -0,0 +1,6 @@
--------------------------------------------------------
-- Constraints for Table C2D_UC_NMA_DECC_DISSEM
--------------------------------------------------------
ALTER TABLE "CT_ET_TEMPLATES"."C2D_UC_NMA_DECC_DISSEM" MODIFY ("A_KEY" NOT NULL ENABLE);
ALTER TABLE "CT_ET_TEMPLATES"."C2D_UC_NMA_DECC_DISSEM" MODIFY ("A_WORKFLOW_HISTORY_KEY" NOT NULL ENABLE);


@@ -0,0 +1,71 @@
--------------------------------------------------------
-- DDL for Table C2D_UC_NMA_DISSEM
--------------------------------------------------------
CREATE TABLE "CT_ET_TEMPLATES"."C2D_UC_NMA_DISSEM"
( "A_KEY" NUMBER(38,0),
"A_WORKFLOW_HISTORY_KEY" NUMBER(38,0),
"REPORTING_NCB" VARCHAR2(2 CHAR) COLLATE "USING_NLS_COMP",
"SNAPSHOT_DATE" DATE,
"FILE_CREATION_DATE" DATE,
"MFI_ID" VARCHAR2(256 CHAR) COLLATE "USING_NLS_COMP",
"OTHER_REG_NO" VARCHAR2(100 CHAR) COLLATE "USING_NLS_COMP",
"TYPE_OF_SYSTEM" VARCHAR2(4 CHAR) COLLATE "USING_NLS_COMP",
"TYPE_OF_OPERATION" VARCHAR2(5 CHAR) COLLATE "USING_NLS_COMP",
"DOM_OR_XBORDER" VARCHAR2(12 CHAR) COLLATE "USING_NLS_COMP",
"NON_MKT_ASSET_TYPE" VARCHAR2(20 CHAR) COLLATE "USING_NLS_COMP",
"MATURITY_DATE" DATE,
"INTEREST_PAYMENT_TYPE" VARCHAR2(8 CHAR) COLLATE "USING_NLS_COMP",
"CAP" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"REFERENCE_RATE" VARCHAR2(9 CHAR) COLLATE "USING_NLS_COMP",
"REFERENCE_RATE_COMMENT" VARCHAR2(200 CHAR) COLLATE "USING_NLS_COMP",
"COLL_BEFORE_HAIRCUTS" NUMBER(14,8),
"COLL_AFTER_HAIRCUTS" NUMBER(14,8),
"NO_AGGR_DEBTORS" NUMBER(10,0),
"ELIGIBLE_VIA_GUAR" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"DEBTOR_TYPE" VARCHAR2(5 CHAR) COLLATE "USING_NLS_COMP",
"DEBTOR_NAME" VARCHAR2(200 CHAR) COLLATE "USING_NLS_COMP",
"DEBTOR_ID_TYPE" VARCHAR2(4 CHAR) COLLATE "USING_NLS_COMP",
"DEBTOR_ID" VARCHAR2(256 CHAR) COLLATE "USING_NLS_COMP",
"DEBTOR_CLASS" VARCHAR2(50 CHAR) COLLATE "USING_NLS_COMP",
"DEBTOR_RESIDENCE" VARCHAR2(3 CHAR) COLLATE "USING_NLS_COMP",
"DEBTOR_CAS" VARCHAR2(4 CHAR) COLLATE "USING_NLS_COMP",
"DEBTOR_CRED_PROV" VARCHAR2(100 CHAR) COLLATE "USING_NLS_COMP",
"DEBTOR_RATING_ENUM_VALUE" VARCHAR2(15 CHAR) COLLATE "USING_NLS_COMP",
"DEBTOR_RATING_NUMBER_VALUE" NUMBER(9,8),
"GUAR_TYPE" VARCHAR2(5 CHAR) COLLATE "USING_NLS_COMP",
"GUAR_NAME" VARCHAR2(100 CHAR) COLLATE "USING_NLS_COMP",
"GUAR_ID_TYPE" VARCHAR2(4 CHAR) COLLATE "USING_NLS_COMP",
"GUAR_ID" VARCHAR2(256 CHAR) COLLATE "USING_NLS_COMP",
"GUAR_CLASS" VARCHAR2(50 CHAR) COLLATE "USING_NLS_COMP",
"GUAR_RESIDENCE" VARCHAR2(3 CHAR) COLLATE "USING_NLS_COMP",
"GUAR_CRED_CAS" VARCHAR2(4 CHAR) COLLATE "USING_NLS_COMP",
"GUAR_CRED_PROV" VARCHAR2(100 CHAR) COLLATE "USING_NLS_COMP",
"GUAR_RATING_ENUM_VALUE" VARCHAR2(15 CHAR) COLLATE "USING_NLS_COMP",
"GUAR_RATING_NUMBER_VALUE" NUMBER(9,8),
"NO_AGGR_ASSETS" NUMBER(10,0),
"DENOMINATION" VARCHAR2(3 CHAR) COLLATE "USING_NLS_COMP",
"SECURED_FLAG" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"RESIDUAL_MATURITY" VARCHAR2(5 CHAR) COLLATE "USING_NLS_COMP",
"BUCKET_SIZE" VARCHAR2(10 CHAR) COLLATE "USING_NLS_COMP",
"NCB_COMMENT" VARCHAR2(200 CHAR) COLLATE "USING_NLS_COMP",
"VALUATION_METHODOLOGY" VARCHAR2(11 CHAR) COLLATE "USING_NLS_COMP",
"NOM_AMT_SUBMITTED" NUMBER(14,8),
"RESET_PERIOD_MORE_ONE_YEAR" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"MOBILISATION_CHANNEL" VARCHAR2(24 CHAR) COLLATE "USING_NLS_COMP",
"CCB" VARCHAR2(2 CHAR) COLLATE "USING_NLS_COMP",
"INVESTOR_SSS" VARCHAR2(6 CHAR) COLLATE "USING_NLS_COMP",
"INTERMEDIARY_SSS" VARCHAR2(6 CHAR) COLLATE "USING_NLS_COMP",
"ISSUER_SSS" VARCHAR2(6 CHAR) COLLATE "USING_NLS_COMP",
"SUSPECT_ID" NUMBER(10,0),
"QUALITY_CHECK_STATUS" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"ERROR_CODE" VARCHAR2(30 CHAR) COLLATE "USING_NLS_COMP",
"ERROR_MESSAGE" VARCHAR2(500 CHAR) COLLATE "USING_NLS_COMP",
"ERROR_POSITION_IN_FILE" VARCHAR2(200 CHAR) COLLATE "USING_NLS_COMP",
"OA_ID" VARCHAR2(50 CHAR) COLLATE "USING_NLS_COMP",
"CONTRACT_ID" VARCHAR2(60 CHAR) COLLATE "USING_NLS_COMP",
"INSTRMNT_ID" VARCHAR2(60 CHAR) COLLATE "USING_NLS_COMP"
) DEFAULT COLLATION "USING_NLS_COMP" SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255
COLUMN STORE COMPRESS FOR QUERY HIGH ROW LEVEL LOCKING LOGGING
TABLESPACE "DATA" ;


@@ -0,0 +1,6 @@
--------------------------------------------------------
-- Constraints for Table C2D_UC_NMA_DISSEM
--------------------------------------------------------
ALTER TABLE "CT_ET_TEMPLATES"."C2D_UC_NMA_DISSEM" MODIFY ("A_KEY" NOT NULL ENABLE);
ALTER TABLE "CT_ET_TEMPLATES"."C2D_UC_NMA_DISSEM" MODIFY ("A_WORKFLOW_HISTORY_KEY" NOT NULL ENABLE);


@@ -0,0 +1,23 @@
--------------------------------------------------------
-- DDL for Table CEPH_PRICING
--------------------------------------------------------
CREATE TABLE "CT_ET_TEMPLATES"."CEPH_PRICING"
( "A_KEY" NUMBER(38,0),
"A_WORKFLOW_HISTORY_KEY" NUMBER(38,0),
"PRICE_DATE" TIMESTAMP (6) WITH TIME ZONE,
"RETURNCODE" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"ISIN_CODE" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"PRICE" NUMBER(28,10),
"WARNING_CODE" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"ACCRUED_INTEREST" NUMBER(28,10),
"POOL_FACTOR" NUMBER(28,10),
"PRICE_NATURE" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"WAL" NUMBER(28,10),
"CLEAN_PRICE_WO_MARKDOWN" NUMBER(28,10),
"ACCRUED_INTEREST_WO_MARKDOWN" NUMBER(28,10),
"THEORETICAL_PRICE" NUMBER(28,10)
) DEFAULT COLLATION "USING_NLS_COMP" SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255
COLUMN STORE COMPRESS FOR QUERY HIGH ROW LEVEL LOCKING LOGGING
TABLESPACE "DATA" ;


@@ -0,0 +1,6 @@
--------------------------------------------------------
-- Constraints for Table CEPH_PRICING
--------------------------------------------------------
ALTER TABLE "CT_ET_TEMPLATES"."CEPH_PRICING" MODIFY ("A_KEY" NOT NULL ENABLE);
ALTER TABLE "CT_ET_TEMPLATES"."CEPH_PRICING" MODIFY ("A_WORKFLOW_HISTORY_KEY" NOT NULL ENABLE);


@@ -0,0 +1,138 @@
--------------------------------------------------------
-- DDL for Table CSDB_DEBT
--------------------------------------------------------
CREATE TABLE "CT_ET_TEMPLATES"."CSDB_DEBT"
( "A_KEY" NUMBER(38,0),
"A_WORKFLOW_HISTORY_KEY" NUMBER(38,0),
"NEWUPDATED" DATE,
"IDLOADDATE_DIM" DATE,
"EXTERNALCODE_ISIN" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"EXTERNALCODETYPE_NC" VARCHAR2(124 CHAR) COLLATE "USING_NLS_COMP",
"EXTERNALCODE_NATIONAL" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRINSTRUMENT" NUMBER(28,0),
"SHORTNAME" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"IDIRDEPOSITORY" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"IDIRDEBTTYPE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRASSETSECTYPE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_CFI" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_ESAI" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_ESAI_DM" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCURRENCY_NOMINAL" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"AMOUNTISSUED" NUMBER(28,10),
"AMOUNTOUTSTANDING" NUMBER(28,10),
"AMOUNTOUTSTANDING_EUR" NUMBER(28,10),
"POOLFACTOR" NUMBER(28,10),
"ISSUEPRICE" NUMBER(28,10),
"IDISSUEDATE" DATE,
"IDIRCOUPONTYPE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCOUPONFREQUENCY" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCURRENCY_COUPON" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"COUPONRATE" NUMBER(28,10),
"COUPONDATE" DATE,
"IDIRREDEMPTIONTYPE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRREDEMPTIONFREQUENCY" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCURRENCY_REDEMPTION" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"REDEMPTIONPRICE" NUMBER(28,10),
"IDMATURITYDATE" DATE,
"IDIRORGANISATIONALIASTYPE_IS" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"ISSUERSOURCECODE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"ISSUEREXTERNALCODE_MFI" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"ISSUEREXTERNALCODE_BIC" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"ISSUEREXTERNALCODE_BEI" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRORGANISATION_ISSUER" NUMBER(28,0),
"ISSUERNAME" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCOUNTRY" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCOUNTRY_DM" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_ESAO" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_ESAO_DM" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_NACE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"PUBLICATIONPRICEDATE" DATE,
"PUBLICATIONPRICE" NUMBER(28,10),
"PUBLICATIONPRICETYPE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"PUBLICATIONPRICEQUOTATIONBASIS" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"MONTHLYAVERAGEPRICE" NUMBER(28,10),
"ACCRUALSTARTDATE" DATE,
"DEBTACCRUALDEBTOR" NUMBER(28,10),
"DEBTACCRUALDEBTOR_DM" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"DEBTACCRUALCREDITOR" NUMBER(28,10),
"DEBTACCRUALCREDITOR_TYP" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"ACCRUEDINTEREST" NUMBER(28,10),
"YTMNONOPTIONADJUSTED" NUMBER(28,10),
"ESCB_ISSUER_IDENT" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"VA_ESCBCODETYPE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDUDCMPPARTY" NUMBER(28,0),
"AMOUNTOUTSTANDINGTYPE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"MARKETCAPITALISATION" NUMBER(28,10),
"MARKETCAPITALISATION_EUR" NUMBER(28,10),
"VA_SECURITYSTATUS" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"VA_INSTRSUPPLEMENTARYCLASS" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"VA_RESIDUALMATURITYCLASS" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"VA_ISINSEC" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"VA_ISELIGIBLEFOREADB" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_ESAI10" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_ESAO10" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRDEBTTYPE_N" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"SENIORITY" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"ISSUEREXTERNALCODE_LEI" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"INSTR_ESA2010_CLASS_VALUETYPE" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"ISS_ESA2010_CLASS_VALUETYPE" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"VA_SECURITYSTATUSDATE" DATE,
"GROUP_TYPE" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"HASEMBEDDEDOPTION" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"VOLUMETRADED" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PRIMARYLISTINGNAME" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PRIMARYLISTINGCOUNTRY" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"VA_INSTRPORTFLAGS" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"VA_BONDDURATION" NUMBER(28,10),
"RESIDUALMATURITY" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"ORIGINAL_MATURITY" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_CFIN" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"COUPONFIRSTPAYMENTDATE" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"COUPONLASTPAYMENTDATE" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"COUPONRATEUNDERLYINGCODE_ISIN" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"COUPONRATESPREAD" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"COUPONRATEMULTIPLIER" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"COUPONRATECAP" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"COUPONRATEFLOOR" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"IDISSUEDATE_TRANCHE" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"ISSUEPRICE_TRANCHE" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"VA_ISPRIVATEPLACEMENT" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"RIAD_CODE" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"RIAD_OUID" NUMBER(38,0),
"ESG1" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"ESG2" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"ESG3" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"STRIP" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"DEPOSITORY_RECEIPT" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"RULE_144A" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"REG_S" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"WARRANT" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"CSEC_RELEVANCE_STOCK" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"CSEC_RELEVANCE_GROSS_ISSUANCE" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"CSEC_RELEVANCE_REDEMPTION" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"ACCRUING_COUPON" NUMBER(28,10),
"ACCRUING_DISCOUNT" NUMBER(28,10),
"STEPID" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PROGRAMNAME" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PROGRAMCEILING" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PROGRAMSTATUS" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"ISSUERNACE21SECTOR" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"INSTRUMENTQUOTATIONBASIS" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER38" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER39" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER40" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER41" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER42" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER43" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER44" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER45" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER46" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER47" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER48" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER49" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER50" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP"
) DEFAULT COLLATION "USING_NLS_COMP" SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255
COLUMN STORE COMPRESS FOR QUERY HIGH ROW LEVEL LOCKING LOGGING
TABLESPACE "DATA" ;


@@ -0,0 +1,6 @@
--------------------------------------------------------
-- Constraints for Table CSDB_DEBT
--------------------------------------------------------
ALTER TABLE "CT_ET_TEMPLATES"."CSDB_DEBT" MODIFY ("A_KEY" NOT NULL ENABLE);
ALTER TABLE "CT_ET_TEMPLATES"."CSDB_DEBT" MODIFY ("A_WORKFLOW_HISTORY_KEY" NOT NULL ENABLE);


@@ -0,0 +1,138 @@
--------------------------------------------------------
-- DDL for Table CSDB_DEBT_DAILY
--------------------------------------------------------
CREATE TABLE "CT_ET_TEMPLATES"."CSDB_DEBT_DAILY"
( "A_KEY" NUMBER(38,0),
"A_WORKFLOW_HISTORY_KEY" NUMBER(38,0),
"NEWUPDATED" DATE,
"IDLOADDATE_DIM" DATE,
"EXTERNALCODE_ISIN" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"EXTERNALCODETYPE_NC" VARCHAR2(124 CHAR) COLLATE "USING_NLS_COMP",
"EXTERNALCODE_NATIONAL" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRINSTRUMENT" NUMBER(28,0),
"SHORTNAME" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"IDIRDEPOSITORY" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"IDIRDEBTTYPE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRASSETSECTYPE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_CFI" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_ESAI" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_ESAI_DM" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCURRENCY_NOMINAL" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"AMOUNTISSUED" NUMBER(28,10),
"AMOUNTOUTSTANDING" NUMBER(28,10),
"AMOUNTOUTSTANDING_EUR" NUMBER(28,10),
"POOLFACTOR" NUMBER(28,10),
"ISSUEPRICE" NUMBER(28,10),
"IDISSUEDATE" DATE,
"IDIRCOUPONTYPE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCOUPONFREQUENCY" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCURRENCY_COUPON" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"COUPONRATE" NUMBER(28,10),
"COUPONDATE" DATE,
"IDIRREDEMPTIONTYPE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRREDEMPTIONFREQUENCY" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCURRENCY_REDEMPTION" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"REDEMPTIONPRICE" NUMBER(28,10),
"IDMATURITYDATE" DATE,
"IDIRORGANISATIONALIASTYPE_IS" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"ISSUERSOURCECODE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"ISSUEREXTERNALCODE_MFI" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"ISSUEREXTERNALCODE_BIC" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"ISSUEREXTERNALCODE_BEI" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRORGANISATION_ISSUER" NUMBER(28,0),
"ISSUERNAME" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCOUNTRY" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCOUNTRY_DM" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_ESAO" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_ESAO_DM" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_NACE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"PUBLICATIONPRICEDATE" DATE,
"PUBLICATIONPRICE" NUMBER(28,10),
"PUBLICATIONPRICETYPE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"PUBLICATIONPRICEQUOTATIONBASIS" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"MONTHLYAVERAGEPRICE" NUMBER(28,10),
"ACCRUALSTARTDATE" DATE,
"DEBTACCRUALDEBTOR" NUMBER(28,10),
"DEBTACCRUALDEBTOR_DM" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"DEBTACCRUALCREDITOR" NUMBER(28,10),
"DEBTACCRUALCREDITOR_TYP" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"ACCRUEDINTEREST" NUMBER(28,10),
"YTMNONOPTIONADJUSTED" NUMBER(28,10),
"ESCB_ISSUER_IDENT" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"ESCB_ISSUER_IDENT_TYP" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDUDCMPPARTY" NUMBER(28,0),
"AMOUNTOUTSTANDINGTYPE" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"MARKETCAPITALISATION" NUMBER(28,10),
"MARKETCAPITALISATION_EUR" NUMBER(28,10),
"VA_SECURITYSTATUS" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"VA_INSTRSUPPLEMENTARYCLASS" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"VA_RESIDUALMATURITYCLASS" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"VA_ISINSEC" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"VA_ISELIGIBLEFOREADB" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_ESAI10" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRCLASSIFICATIONCODE_ESAO10" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"IDIRDEBTTYPE_N" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"SENIORITY" VARCHAR2(32 CHAR) COLLATE "USING_NLS_COMP",
"ISSUEREXTERNALCODE_LEI" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"INSTR_ESA2010_CLASS_VALUETYPE" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"ISS_ESA2010_CLASS_VALUETYPE" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"SEC_STATUS_DATE" DATE,
"GROUP_TYPE" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"HAS_EMBEDDED_OPTION" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"VOLUME_TRADED" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PRIMARY_LISTING_NAME" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PRIM_LISTING_RESIDENCY_COUNTRY" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"INSTR_PORTFOLIO_FLAGS" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"BOND_DURATION" NUMBER(28,10),
"RESIDUAL_MATURITY" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"ORIGINAL_MATURITY" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"CFIN_CLASSIFICATION" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"COUPONFIRSTPAYMENTDATE" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"COUPONLASTPAYMENTDATE" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"COUPONRATEUNDERLYINGCODE_ISIN" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"COUPONRATESPREAD" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"COUPONRATEMULTIPLIER" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"COUPONRATECAP" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"COUPONRATEFLOOR" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"IDISSUEDATE_TRANCHE" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"ISSUEPRICE_TRANCHE" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"VA_ISPRIVATEPLACEMENT" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"RIAD_CODE" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"RIAD_OUID" NUMBER(38,0),
"ESG1" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"ESG2" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"ESG3" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"STRIP" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"DEPOSITORY_RECEIPT" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"RULE_144A" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"REG_S" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"WARRANT" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"CSEC_RELEVANCE_STOCK" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"CSEC_RELEVANCE_GROSS_ISSUANCE" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"CSEC_RELEVANCE_REDEMPTION" VARCHAR2(1 CHAR) COLLATE "USING_NLS_COMP",
"ACCRUING_COUPON" NUMBER(28,10),
"ACCRUING_DISCOUNT" NUMBER(28,10),
"STEPID" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PROGRAMNAME" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PROGRAMCEILING" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PROGRAMSTATUS" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"ISSUERNACE21SECTOR" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"INSTRUMENTQUOTATIONBASIS" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER38" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER39" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER40" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER41" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER42" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER43" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER44" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER45" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER46" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER47" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER48" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER49" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP",
"PLACEHOLDER50" VARCHAR2(4000 CHAR) COLLATE "USING_NLS_COMP"
) DEFAULT COLLATION "USING_NLS_COMP" SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255
COLUMN STORE COMPRESS FOR QUERY HIGH ROW LEVEL LOCKING LOGGING
TABLESPACE "DATA" ;

Some files were not shown because too many files have changed in this diff.