Compare commits

...

26 Commits

Author SHA1 Message Date
Grzegorz Michalski
a35e28042b feat(FILE_ARCHIVER): Improve archival logic and error handling in FILE_ARCHIVER procedures 2026-03-23 11:48:37 +01:00
Grzegorz Michalski
92feb95ae0 feat(FILE_ARCHIVER): Enhance documentation with new function details and clarify private functions 2026-03-20 13:37:05 +01:00
Grzegorz Michalski
74b8857096 feat(FILE_ARCHIVER): Rename IS_KEEP_IN_TRASH to IS_KEPT_IN_TRASH for consistency in configuration 2026-03-20 13:26:37 +01:00
Grzegorz Michalski
24997b1583 feat(MARS-1409): Add prerequisite checks for MARS-1409 objects in installation script 2026-03-20 13:13:26 +01:00
Grzegorz Michalski
eb9b2bc38b feat(FILE_MANAGER): Rename pIsKeepInTrash to pIsKeptInTrash for consistency in parameter naming 2026-03-19 13:29:51 +01:00
Grzegorz Michalski
2ea708a694 Merge branch 'main' of https://git.itbi.mywire.org/admin/mars 2026-03-19 12:26:27 +01:00
Grzegorz Michalski
12c58f32a3 feat(MARS-1409): Rename IS_KEEP_IN_TRASH to IS_KEPT_IN_TRASH across relevant files and update related logic 2026-03-19 12:23:29 +01:00
Grzegorz Michalski
811df6e8b1 feat(FILE_ARCHIVER): Enhance logging messages to include detailed parameters for better error tracking 2026-03-19 12:14:35 +01:00
Grzegorz Michalski
c2e9409e55 feat(FILE_ARCHIVER): Enhance logging by adding parameters to log events for better traceability 2026-03-19 11:51:37 +01:00
Grzegorz Michalski
c96bf2051f Merge branch 'main' of https://git.itbi.mywire.org/admin/mars 2026-03-19 11:13:37 +01:00
Grzegorz Michalski
5d0e03d7ad feat(MARS-1409): Add DATA_EXPORTER package installation and rollback scripts 2026-03-19 11:13:09 +01:00
Grzegorz Michalski
ffd6c7eeae feat(ENV_MANAGER): Add new error codes for workflow key validation and update package version to 3.3.0
refactor(FILE_MANAGER): Remove redundant error logging for unknown errors
2026-03-19 11:13:02 +01:00
Grzegorz Michalski
bbdf008125 Add DATA_EXPORTER package for comprehensive data export capabilities
- Introduced CT_MRDS.DATA_EXPORTER package to facilitate data exports in CSV and Parquet formats.
- Implemented support for cloud storage integration via Oracle Cloud Infrastructure (OCI).
- Added versioning and detailed version history for tracking changes and improvements.
- Included main export procedures: EXPORT_TABLE_DATA, EXPORT_TABLE_DATA_BY_DATE, and EXPORT_TABLE_DATA_TO_CSV_BY_DATE.
- Enhanced parallel processing capabilities for improved performance during data exports.
2026-03-19 10:50:28 +01:00
Grzegorz Michalski
396e7416f6 feat(FILE_ARCHIVER): Update SQL query in ARCHIVE_TABLE_DATA for improved archival statistics and column order consistency 2026-03-19 09:37:42 +01:00
Grzegorz Michalski
0ed75875ac Refactor MARS-1409: Rollback changes to A_SOURCE_FILE_RECEIVED and related tables
- Dropped A_WORKFLOW_HISTORY_KEY column from A_SOURCE_FILE_RECEIVED with data preservation.
- Removed unnecessary checks for column existence during rollback.
- Updated A_SOURCE_FILE_CONFIG, A_TABLE_STAT, and A_TABLE_STAT_HIST to their pre-MARS-1409 structures, excluding new columns added in MARS-1409.
- Adjusted FILE_ARCHIVER package to reflect changes in statistics handling and archival triggers.
- Revised rollback script to ensure proper order of operations for restoring previous versions of packages and tables.
2026-03-19 08:46:49 +01:00
Grzegorz Michalski
a7db9b67bc Merge branch 'main' of https://git.itbi.mywire.org/admin/mars 2026-03-18 18:19:19 +01:00
Grzegorz Michalski
ce9b6eeff6 feat(FILE_MANAGER): Update package version to 3.6.3 and enhance ADD_SOURCE_FILE_CONFIG with new parameters for archival control
- Bump package version to 3.6.3 and update build date.
- Add new parameters: pIsArchiveEnabled, pIsKeepInTrash, pArchivalStrategy, pMinimumAgeMonths to ADD_SOURCE_FILE_CONFIG.
- Include pIsWorkflowSuccessRequired parameter to control workflow success requirement for archival.
- Update version history to reflect changes.

feat(A_SOURCE_FILE_CONFIG): Modify table structure to include new archival control flags

- Add IS_WORKFLOW_SUCCESS_REQUIRED column to A_SOURCE_FILE_CONFIG for workflow bypass functionality.
- Update constraints and comments for new columns.
- Ensure backward compatibility with default values.

fix(A_TABLE_STAT, A_TABLE_STAT_HIST): Extend table structures to accommodate new workflow success tracking

- Add IS_WORKFLOW_SUCCESS_REQUIRED column to both A_TABLE_STAT and A_TABLE_STAT_HIST.
- Update comments to clarify the purpose of new columns.

docs(FILE_ARCHIVER_Guide): Revise documentation to reflect new archival features and configurations

- Document new IS_WORKFLOW_SUCCESS_REQUIRED flag and its implications for archival processes.
- Update examples and configurations to align with recent changes in the database schema.
- Ensure clarity on archival strategies and their configurations.
2026-03-18 18:19:04 +01:00
Grzegorz Michalski
0725119b45 feat: Enhance MARS-1409 post-hook scripts to include checks for empty ODS tables and update installation script for workflow key diagnosis 2026-03-17 12:10:18 +01:00
Grzegorz Michalski
896e67bcb9 feat: Refactor A_SOURCE_FILE_CONFIG table structure and update comments for clarity 2026-03-17 10:58:01 +01:00
Grzegorz Michalski
ad5a6f393a feat: Update installation script to reflect expected duration for MARS-1409 post-hook process 2026-03-17 09:54:42 +01:00
Grzegorz Michalski
a4ac132b76 feat: Implement MARS-1409 changes to add ARCHIVAL_STRATEGY and ARCH_MINIMUM_AGE_MONTHS columns to A_TABLE_STAT and A_TABLE_STAT_HIST, and update FILE_ARCHIVER for handling these new fields 2026-03-17 08:23:14 +01:00
Grzegorz Michalski
6468d12349 minor 2026-03-13 13:51:53 +01:00
Grzegorz Michalski
fe0f7bce18 feat: Enhance FILE_ARCHIVER package to handle empty ODS bucket scenarios with improved statistics initialization 2026-03-13 13:34:38 +01:00
Grzegorz Michalski
6b2f60f413 feat: Update FILE_ARCHIVER package to version 3.3.1 with improved handling for empty ODS bucket scenarios 2026-03-13 11:40:19 +01:00
Grzegorz Michalski
ca11debd93 minor 2026-03-13 11:35:11 +01:00
Grzegorz Michalski
24e6bce18c minor changes 2026-03-13 11:34:59 +01:00
50 changed files with 4732 additions and 529 deletions
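
Two commits above change a caller-facing API and are worth a concrete illustration. Commit ce9b6eeff0 adds archival-control parameters to FILE_MANAGER.ADD_SOURCE_FILE_CONFIG, and eb9b2bc38b later renames pIsKeepInTrash to pIsKeptInTrash. A minimal call sketch follows; the identifying parameters (pSourceKey, pSourceFileId, pTableId) and all values are assumptions rather than the package's confirmed signature, since only the five archival-control parameters are documented in the commit messages.

-- Sketch only: identifying parameter names and values are assumed;
-- the archival-control parameters come from ce9b6eeff0 / eb9b2bc38b.
BEGIN
  CT_MRDS.FILE_MANAGER.ADD_SOURCE_FILE_CONFIG(
    pSourceKey                 => 'LM',          -- assumed
    pSourceFileId              => 'UC_DISSEM',   -- assumed
    pTableId                   => 'UC_DISSEM',   -- assumed
    pIsArchiveEnabled          => 'Y',
    pIsKeptInTrash             => 'N',           -- named pIsKeepInTrash before eb9b2bc38b
    pArchivalStrategy          => 'MINIMUM_AGE_MONTHS',
    pMinimumAgeMonths          => 2,
    pIsWorkflowSuccessRequired => 'N'            -- bypass workflow-success gate (MARS-1409)
  );
  COMMIT;
END;
/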

View File

@@ -22,6 +22,8 @@ DECLARE
vFailedConfigs NUMBER := 0;
vTableNotFound NUMBER := 0;
vSkippedConfigs NUMBER := 0;
vEmptyTables NUMBER := 0;
vHasData NUMBER := 0;
vTableName VARCHAR2(200);
vSQL VARCHAR2(32767);
vRecordsToUpdate NUMBER := 0;
@@ -84,14 +86,37 @@ BEGIN
CONTINUE;
END IF;
-- Pre-check: verify ODS table has accessible data (empty external table throws ORA-29913/KUP-05002)
vHasData := 0;
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM (SELECT 1 FROM ' || vTableName || ' t WHERE ROWNUM = 1)'
INTO vHasData;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -29913 OR INSTR(SQLERRM, 'KUP-05002') > 0 THEN
NULL; -- vHasData stays 0
ELSE
RAISE;
END IF;
END;
IF vHasData = 0 THEN
vEmptyTables := vEmptyTables + 1;
DBMS_OUTPUT.PUT_LINE('SKIP: Config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (' || config_rec.A_SOURCE_KEY || '/' || config_rec.SOURCE_FILE_ID || '/' || config_rec.TABLE_ID ||
') - ODS table has no files at storage location (empty): ' || vTableName);
CONTINUE;
END IF;
DBMS_OUTPUT.PUT_LINE('Processing config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (' || config_rec.A_SOURCE_KEY || '/' || config_rec.SOURCE_FILE_ID || '/' || config_rec.TABLE_ID || ')...');
-- Update using ODS table
-- NO_PARALLEL hint required: ODS external tables (OCI Object Storage) fail with ORA-12801 under parallel query
vSQL :=
'UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'SET A_WORKFLOW_HISTORY_KEY = ( ' ||
' SELECT t.A_WORKFLOW_HISTORY_KEY ' ||
' SELECT /*+ NO_PARALLEL(t) */ t.A_WORKFLOW_HISTORY_KEY ' ||
' FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME ' ||
' AND rownum=1 ' ||
@@ -100,13 +125,13 @@ BEGIN
' AND sfr.A_WORKFLOW_HISTORY_KEY IS NULL ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'', ''READY_FOR_INGESTION'', ''INGESTED'', ''ARCHIVED'', ''ARCHIVED_AND_TRASHED'', ''ARCHIVED_AND_PURGED'') ' ||
' AND EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' SELECT /*+ NO_PARALLEL(t) */ 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME ' ||
' AND rownum=1 ' ||
' )';
EXECUTE IMMEDIATE vSQL USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
vUpdatedCurrent := SQL%ROWCOUNT; -- capture before COMMIT, which resets SQL%ROWCOUNT
COMMIT;
vUpdatedTotal := vUpdatedTotal + vUpdatedCurrent;
@@ -133,6 +158,7 @@ BEGIN
DBMS_OUTPUT.PUT_LINE(' Total records updated: ' || vUpdatedTotal);
DBMS_OUTPUT.PUT_LINE(' Configurations skipped (no NULL records): ' || vSkippedConfigs);
DBMS_OUTPUT.PUT_LINE(' Configurations skipped (ODS table not found): ' || vTableNotFound);
DBMS_OUTPUT.PUT_LINE(' Configurations skipped (ODS table empty - no files at location): ' || vEmptyTables);
DBMS_OUTPUT.PUT_LINE(' Configurations failed (unexpected errors): ' || vFailedConfigs);
-- Check remaining NULL records - targeted statuses only
@@ -177,3 +203,47 @@ END;
PROMPT
PROMPT Existing workflow keys update completed!
PROMPT
-- ============================================================================
-- Step 2: Set PROCESSING_STATUS = 'INGESTED' for records whose workflow
-- completed successfully (mirrors trigger A_WORKFLOW_HISTORY logic)
-- ============================================================================
PROMPT
PROMPT Updating PROCESSING_STATUS to INGESTED for completed workflows...
DECLARE
vUpdatedIngested NUMBER := 0;
BEGIN
UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED sfr
SET sfr.PROCESSING_STATUS = 'INGESTED',
sfr.PROCESS_NAME = (
SELECT wh.service_name
FROM CT_MRDS.A_WORKFLOW_HISTORY wh
WHERE wh.a_workflow_history_key = sfr.a_workflow_history_key
)
WHERE sfr.A_WORKFLOW_HISTORY_KEY IS NOT NULL
AND sfr.PROCESSING_STATUS IN ('READY_FOR_INGESTION')
AND EXISTS (
SELECT 1
FROM CT_MRDS.A_WORKFLOW_HISTORY wh
WHERE wh.a_workflow_history_key = sfr.a_workflow_history_key
AND wh.workflow_successful = 'Y'
);
vUpdatedIngested := SQL%ROWCOUNT;
COMMIT;
DBMS_OUTPUT.PUT_LINE('Updated PROCESSING_STATUS to INGESTED: ' || vUpdatedIngested || ' record(s)');
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
DBMS_OUTPUT.PUT_LINE('FATAL ERROR: ' || SQLERRM);
DBMS_OUTPUT.PUT_LINE('Transaction rolled back');
RAISE;
END;
/
PROMPT
PROMPT INGESTED status update completed!
PROMPT
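
The backfill above leans on two defensive patterns: a cheap ROWNUM = 1 probe that detects an empty OCI bucket (ORA-29913 with KUP-05002) before any heavy UPDATE work, and NO_PARALLEL hints because these external tables fail with ORA-12801 under parallel query. A standalone sketch of the probe, with the table name as a placeholder:

DECLARE
  vHasData NUMBER := 0;
  vTable   VARCHAR2(200) := 'ODS.SOME_EXTERNAL_TABLE'; -- placeholder name
BEGIN
  BEGIN
    EXECUTE IMMEDIATE
      'SELECT COUNT(*) FROM (SELECT 1 FROM ' || vTable || ' WHERE ROWNUM = 1)'
      INTO vHasData;
  EXCEPTION
    WHEN OTHERS THEN
      -- ORA-29913 wraps external-table access errors; KUP-05002 reports no files at the location
      IF SQLCODE = -29913 OR INSTR(SQLERRM, 'KUP-05002') > 0 THEN
        vHasData := 0;
      ELSE
        RAISE;
      END IF;
  END;
  DBMS_OUTPUT.PUT_LINE(CASE WHEN vHasData = 0 THEN 'empty or inaccessible' ELSE 'has data' END);
END;
/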

View File

@@ -46,6 +46,16 @@ DECLARE
cMaxPrint CONSTANT NUMBER := 1000;
vPrinted NUMBER;
FUNCTION IS_EXTERNAL_TABLE_EMPTY_ERROR(
pSqlCode NUMBER,
pSqlErrm VARCHAR2
) RETURN BOOLEAN
IS
BEGIN
RETURN pSqlCode IN (-29913, -29400)
OR INSTR(pSqlErrm, 'KUP-05002') > 0;
END;
BEGIN
FOR config_rec IN (
@@ -79,7 +89,7 @@ BEGIN
INTO vTableExists;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -29913 THEN
IF IS_EXTERNAL_TABLE_EMPTY_ERROR(SQLCODE, SQLERRM) THEN
vBucketEmpty := TRUE;
ELSE
RAISE;
@@ -97,63 +107,80 @@ BEGIN
vInBothNoKey := 0;
ELSE
-- ----------------------------------------------------------------
-- [A] In ODS bucket but NOT in A_SOURCE_FILE_RECEIVED
-- ----------------------------------------------------------------
EXECUTE IMMEDIATE
'SELECT COUNT(DISTINCT t.file$name) ' ||
'FROM ' || vTableName || ' t ' ||
'WHERE t.file$name IS NOT NULL ' ||
' AND NOT EXISTS ( ' ||
' SELECT 1 FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
' WHERE sfr.SOURCE_FILE_NAME = t.file$name ' ||
' AND sfr.A_SOURCE_FILE_CONFIG_KEY = :1)'
INTO vOnlyInBucket
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
BEGIN
-- ----------------------------------------------------------------
-- [A] In ODS bucket but NOT in A_SOURCE_FILE_RECEIVED
-- ----------------------------------------------------------------
EXECUTE IMMEDIATE
'SELECT COUNT(DISTINCT t.file$name) ' ||
'FROM ' || vTableName || ' t ' ||
'WHERE t.file$name IS NOT NULL ' ||
' AND NOT EXISTS ( ' ||
' SELECT 1 FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
' WHERE sfr.SOURCE_FILE_NAME = t.file$name ' ||
' AND sfr.A_SOURCE_FILE_CONFIG_KEY = :1)'
INTO vOnlyInBucket
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
-- ----------------------------------------------------------------
-- [B] In A_SOURCE_FILE_RECEIVED (targeted statuses) but NOT in ODS bucket
-- ----------------------------------------------------------------
EXECUTE IMMEDIATE
'SELECT COUNT(*) ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND NOT EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME)'
INTO vOnlyInDB
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
-- ----------------------------------------------------------------
-- [B] In A_SOURCE_FILE_RECEIVED (targeted statuses) but NOT in ODS bucket
-- ----------------------------------------------------------------
EXECUTE IMMEDIATE
'SELECT COUNT(*) ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND NOT EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME)'
INTO vOnlyInDB
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
-- ----------------------------------------------------------------
-- [C] In both, A_WORKFLOW_HISTORY_KEY IS NOT NULL
-- ----------------------------------------------------------------
EXECUTE IMMEDIATE
'SELECT COUNT(DISTINCT sfr.A_SOURCE_FILE_RECEIVED_KEY) ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.A_WORKFLOW_HISTORY_KEY IS NOT NULL ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME)'
INTO vInBothWithKey
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
-- ----------------------------------------------------------------
-- [C] In both, A_WORKFLOW_HISTORY_KEY IS NOT NULL
-- ----------------------------------------------------------------
EXECUTE IMMEDIATE
'SELECT COUNT(DISTINCT sfr.A_SOURCE_FILE_RECEIVED_KEY) ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.A_WORKFLOW_HISTORY_KEY IS NOT NULL ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME)'
INTO vInBothWithKey
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
-- ----------------------------------------------------------------
-- [D] In both, A_WORKFLOW_HISTORY_KEY IS NULL
-- ----------------------------------------------------------------
EXECUTE IMMEDIATE
'SELECT COUNT(DISTINCT sfr.A_SOURCE_FILE_RECEIVED_KEY) ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.A_WORKFLOW_HISTORY_KEY IS NULL ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME)'
INTO vInBothNoKey
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
-- ----------------------------------------------------------------
-- [D] In both, A_WORKFLOW_HISTORY_KEY IS NULL
-- ----------------------------------------------------------------
EXECUTE IMMEDIATE
'SELECT COUNT(DISTINCT sfr.A_SOURCE_FILE_RECEIVED_KEY) ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.A_WORKFLOW_HISTORY_KEY IS NULL ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME)'
INTO vInBothNoKey
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
EXCEPTION
WHEN OTHERS THEN
IF IS_EXTERNAL_TABLE_EMPTY_ERROR(SQLCODE, SQLERRM) THEN
vBucketEmpty := TRUE;
vOnlyInBucket := 0;
SELECT COUNT(*) INTO vOnlyInDB
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr
WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = config_rec.A_SOURCE_FILE_CONFIG_KEY
AND sfr.PROCESSING_STATUS IN ('VALIDATED','READY_FOR_INGESTION','INGESTED','ARCHIVED','ARCHIVED_AND_TRASHED','ARCHIVED_AND_PURGED');
vInBothWithKey := 0;
vInBothNoKey := 0;
DBMS_OUTPUT.PUT_LINE(' NOTE: ODS bucket became empty/inaccessible during diagnostics for ' || vTableName || '. Falling back to DB-only counts for [B].');
ELSE
RAISE;
END IF;
END;
END IF; -- vBucketEmpty
@@ -204,16 +231,43 @@ BEGIN
vConfigsWithIssues := vConfigsWithIssues + 1;
DBMS_OUTPUT.PUT_LINE(' [B] Registered files not found in bucket:');
vPrinted := 0;
OPEN vRefCursor FOR
'SELECT sfr.SOURCE_FILE_NAME, sfr.PROCESSING_STATUS, sfr.A_WORKFLOW_HISTORY_KEY ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND NOT EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME) ' ||
'ORDER BY sfr.SOURCE_FILE_NAME'
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
BEGIN
IF vBucketEmpty THEN
OPEN vRefCursor FOR
'SELECT sfr.SOURCE_FILE_NAME, sfr.PROCESSING_STATUS, sfr.A_WORKFLOW_HISTORY_KEY ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
'ORDER BY sfr.SOURCE_FILE_NAME'
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
ELSE
OPEN vRefCursor FOR
'SELECT sfr.SOURCE_FILE_NAME, sfr.PROCESSING_STATUS, sfr.A_WORKFLOW_HISTORY_KEY ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND NOT EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME) ' ||
'ORDER BY sfr.SOURCE_FILE_NAME'
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
END IF;
EXCEPTION
WHEN OTHERS THEN
IF IS_EXTERNAL_TABLE_EMPTY_ERROR(SQLCODE, SQLERRM) THEN
vBucketEmpty := TRUE;
DBMS_OUTPUT.PUT_LINE(' NOTE: Skipping ODS anti-join details due to empty/inaccessible external table for ' || vTableName || '.');
OPEN vRefCursor FOR
'SELECT sfr.SOURCE_FILE_NAME, sfr.PROCESSING_STATUS, sfr.A_WORKFLOW_HISTORY_KEY ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
'ORDER BY sfr.SOURCE_FILE_NAME'
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
ELSE
RAISE;
END IF;
END;
LOOP
DECLARE
vStatus VARCHAR2(50);
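
The diagnostics script concentrates this classification in IS_EXTERNAL_TABLE_EMPTY_ERROR, so a bucket that empties mid-run degrades to DB-only counts instead of aborting. The same helper can guard any ad-hoc count against an ODS external table; a minimal sketch with a placeholder table name:

DECLARE
  vCount NUMBER := 0;
  FUNCTION IS_EXTERNAL_TABLE_EMPTY_ERROR(
    pSqlCode NUMBER,
    pSqlErrm VARCHAR2
  ) RETURN BOOLEAN
  IS
  BEGIN
    -- ORA-29913/ORA-29400 wrap driver-level failures; KUP-05002 reports an empty location
    RETURN pSqlCode IN (-29913, -29400)
        OR INSTR(pSqlErrm, 'KUP-05002') > 0;
  END;
BEGIN
  BEGIN
    EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ODS.SOME_EXTERNAL_TABLE' INTO vCount; -- placeholder name
  EXCEPTION
    WHEN OTHERS THEN
      IF IS_EXTERNAL_TABLE_EMPTY_ERROR(SQLCODE, SQLERRM) THEN
        vCount := 0; -- treat an empty bucket as zero rows
      ELSE
        RAISE;
      END IF;
  END;
  DBMS_OUTPUT.PUT_LINE('Rows: ' || vCount);
END;
/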

View File

@@ -39,7 +39,7 @@ PROMPT - Match records by SOURCE_FILE_NAME against file$name in ODS tables
PROMPT - Skip configs with no NULL records or missing ODS tables
PROMPT
PROMPT Prerequisite: MARS-1409 installed (A_WORKFLOW_HISTORY_KEY column exists)
PROMPT Expected Duration: 1-5 minutes (depends on data volume)
PROMPT Expected Duration: 30-180 minutes (depends on data volume)
PROMPT ============================================================================
-- Confirm installation with user
@@ -53,6 +53,43 @@ END;
/
WHENEVER SQLERROR CONTINUE
PROMPT
PROMPT ============================================================================
PROMPT PREREQUISITE CHECK: Verifying MARS-1409 objects
PROMPT ============================================================================
WHENEVER SQLERROR EXIT SQL.SQLCODE
DECLARE
vColCount NUMBER;
vTableCount NUMBER;
BEGIN
SELECT COUNT(*)
INTO vColCount
FROM ALL_TAB_COLUMNS
WHERE OWNER = 'CT_MRDS'
AND TABLE_NAME = 'A_SOURCE_FILE_RECEIVED'
AND COLUMN_NAME = 'A_WORKFLOW_HISTORY_KEY';
IF vColCount = 0 THEN
RAISE_APPLICATION_ERROR(-20001,
'Prerequisite failed: CT_MRDS.A_SOURCE_FILE_RECEIVED.A_WORKFLOW_HISTORY_KEY not found. Install MARS-1409 first (or do not run POSTHOOK after rollback).');
END IF;
SELECT COUNT(*)
INTO vTableCount
FROM ALL_TABLES
WHERE OWNER = 'CT_MRDS'
AND TABLE_NAME = 'A_WORKFLOW_HISTORY';
IF vTableCount = 0 THEN
RAISE_APPLICATION_ERROR(-20002,
'Prerequisite failed: CT_MRDS.A_WORKFLOW_HISTORY table not found.');
END IF;
DBMS_OUTPUT.PUT_LINE('OK: Prerequisites satisfied (MARS-1409 schema changes detected).');
END;
/
WHENEVER SQLERROR CONTINUE
PROMPT
PROMPT ============================================================================
PROMPT STEP 1: Backfill A_WORKFLOW_HISTORY_KEY for existing records

View File

@@ -35,7 +35,6 @@ PROMPT This will reverse all changes from MARS-1409-POSTHOOK installation.
PROMPT
PROMPT Rollback steps:
PROMPT 1. Clear A_WORKFLOW_HISTORY_KEY values from A_SOURCE_FILE_RECEIVED
PROMPT 2. Verify package versions
PROMPT ============================================================================
-- Confirm rollback with user
@@ -55,12 +54,6 @@ PROMPT STEP 1: Clear backfilled A_WORKFLOW_HISTORY_KEY values
PROMPT ============================================================================
@@91_MARS_1409_POSTHOOK_rollback_workflow_keys.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 2: Verify Package Versions
PROMPT ============================================================================
@@verify_packages_version.sql
PROMPT
PROMPT ============================================================================
PROMPT MARS-1409-POSTHOOK Rollback Complete

View File

@@ -1,45 +1,55 @@
-- ============================================================================
-- MARS-1409 Step 01: Add A_WORKFLOW_HISTORY_KEY column
-- MARS-1409 Step 01: Add A_WORKFLOW_HISTORY_KEY column to A_SOURCE_FILE_RECEIVED
-- ============================================================================
-- Purpose: Add A_WORKFLOW_HISTORY_KEY column to A_SOURCE_FILE_RECEIVED table
-- Prerequisites: Table A_SOURCE_FILE_RECEIVED exists, A_WORKFLOW_HISTORY table exists
-- Purpose: Add A_WORKFLOW_HISTORY_KEY column to A_SOURCE_FILE_RECEIVED
-- using ALTER TABLE to preserve existing data.
-- Prerequisites: A_SOURCE_FILE_CONFIG table exists (FK dependency)
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
WHENEVER SQLERROR EXIT SQL.SQLCODE
PROMPT
PROMPT Adding A_WORKFLOW_HISTORY_KEY column to A_SOURCE_FILE_RECEIVED...
PROMPT Adding A_WORKFLOW_HISTORY_KEY to A_SOURCE_FILE_RECEIVED...
DECLARE
vColumnExists NUMBER := 0;
BEGIN
SELECT COUNT(*) INTO vColumnExists
FROM ALL_TAB_COLUMNS
WHERE OWNER = 'CT_MRDS'
AND TABLE_NAME = 'A_SOURCE_FILE_RECEIVED'
AND COLUMN_NAME = 'A_WORKFLOW_HISTORY_KEY';
IF vColumnExists > 0 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column A_WORKFLOW_HISTORY_KEY already exists.');
RETURN;
END IF;
EXECUTE IMMEDIATE '
ALTER TABLE CT_MRDS.A_SOURCE_FILE_RECEIVED ADD (
A_WORKFLOW_HISTORY_KEY NUMBER
)';
EXECUTE IMMEDIATE '
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_WORKFLOW_HISTORY_KEY IS
''Direct link to workflow history - each file has exactly one workflow execution. Populated during VALIDATE_SOURCE_FILE_RECEIVED (MARS-1409)''';
DBMS_OUTPUT.PUT_LINE('A_WORKFLOW_HISTORY_KEY column added successfully!');
EXECUTE IMMEDIATE 'ALTER TABLE CT_MRDS.A_SOURCE_FILE_RECEIVED ADD (A_WORKFLOW_HISTORY_KEY NUMBER)';
DBMS_OUTPUT.PUT_LINE('Column A_WORKFLOW_HISTORY_KEY added to A_SOURCE_FILE_RECEIVED.');
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('ERROR adding column: ' || SQLERRM);
RAISE;
IF SQLCODE = -1430 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column A_WORKFLOW_HISTORY_KEY already exists in A_SOURCE_FILE_RECEIVED.');
ELSE
RAISE;
END IF;
END;
/
PROMPT
PROMPT Adding comment on A_WORKFLOW_HISTORY_KEY...
BEGIN
EXECUTE IMMEDIATE q'[COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_WORKFLOW_HISTORY_KEY IS 'Direct link to workflow history - each file has exactly one workflow execution. Populated during VALIDATE_SOURCE_FILE_RECEIVED (MARS-1409)']';
DBMS_OUTPUT.PUT_LINE('Comment on A_WORKFLOW_HISTORY_KEY added.');
END;
/
PROMPT
PROMPT Renaming IS_KEEP_IN_TRASH to IS_KEPT_IN_TRASH in A_SOURCE_FILE_CONFIG...
BEGIN
EXECUTE IMMEDIATE 'ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG RENAME COLUMN IS_KEEP_IN_TRASH TO IS_KEPT_IN_TRASH';
DBMS_OUTPUT.PUT_LINE('Column IS_KEEP_IN_TRASH renamed to IS_KEPT_IN_TRASH in A_SOURCE_FILE_CONFIG.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -904 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column IS_KEEP_IN_TRASH does not exist (already renamed or not present).');
ELSE
RAISE;
END IF;
END;
/
PROMPT
PROMPT Step 01 completed: A_WORKFLOW_HISTORY_KEY column added and IS_KEEP_IN_TRASH renamed to IS_KEPT_IN_TRASH.
PROMPT
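
Step 01 gets its re-runnability from targeted error traps rather than dictionary pre-checks: ORA-01430 when the column being added already exists, ORA-00904 when the column to rename is already gone. The same shape works for any idempotent DDL step; a sketch against a hypothetical table and column:

BEGIN
  EXECUTE IMMEDIATE 'ALTER TABLE CT_MRDS.SOME_TABLE ADD (SOME_COLUMN NUMBER)'; -- hypothetical
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE = -1430 THEN
      NULL; -- ORA-01430: column already exists, step was applied earlier
    ELSE
      RAISE;
    END IF;
END;
/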

View File

@@ -0,0 +1,104 @@
-- ============================================================================
-- MARS-1409 Step 10: Update A_TABLE_STAT, A_TABLE_STAT_HIST, A_SOURCE_FILE_CONFIG
-- ============================================================================
-- Purpose: Apply MARS-1409 table changes:
-- - A_TABLE_STAT and A_TABLE_STAT_HIST: DROP and recreate from new_version
-- (stats tables with no critical persistent data)
-- - A_SOURCE_FILE_CONFIG: ALTER TABLE ADD IS_WORKFLOW_SUCCESS_REQUIRED column
-- (preserves existing configuration data)
-- - A_SOURCE_FILE_RECEIVED: no changes in this step
-- Prerequisites: A_SOURCE table exists (FK parent of A_SOURCE_FILE_CONFIG)
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
WHENEVER SQLERROR EXIT SQL.SQLCODE
-- ----------------------------------------------------------------------------
-- DROP A_TABLE_STAT
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Dropping A_TABLE_STAT...
BEGIN
EXECUTE IMMEDIATE 'DROP TABLE CT_MRDS.A_TABLE_STAT';
DBMS_OUTPUT.PUT_LINE('Table A_TABLE_STAT dropped.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -942 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Table A_TABLE_STAT does not exist.');
ELSE
RAISE;
END IF;
END;
/
-- ----------------------------------------------------------------------------
-- DROP A_TABLE_STAT_HIST
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Dropping A_TABLE_STAT_HIST...
BEGIN
EXECUTE IMMEDIATE 'DROP TABLE CT_MRDS.A_TABLE_STAT_HIST';
DBMS_OUTPUT.PUT_LINE('Table A_TABLE_STAT_HIST dropped.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -942 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Table A_TABLE_STAT_HIST does not exist.');
ELSE
RAISE;
END IF;
END;
/
-- ----------------------------------------------------------------------------
-- ADD IS_WORKFLOW_SUCCESS_REQUIRED to A_SOURCE_FILE_CONFIG
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Adding IS_WORKFLOW_SUCCESS_REQUIRED to A_SOURCE_FILE_CONFIG...
BEGIN
EXECUTE IMMEDIATE
'ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG ADD ('
|| ' IS_WORKFLOW_SUCCESS_REQUIRED CHAR(1) DEFAULT ''Y'' NOT NULL '
|| ' CONSTRAINT CHK_IS_WORKFLOW_SUCCESS_REQUIRED CHECK (IS_WORKFLOW_SUCCESS_REQUIRED IN (''Y'', ''N''))'
|| ')';
DBMS_OUTPUT.PUT_LINE('Column IS_WORKFLOW_SUCCESS_REQUIRED added to A_SOURCE_FILE_CONFIG.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -1430 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column IS_WORKFLOW_SUCCESS_REQUIRED already exists in A_SOURCE_FILE_CONFIG.');
ELSE
RAISE;
END IF;
END;
/
PROMPT
PROMPT Adding comment on IS_WORKFLOW_SUCCESS_REQUIRED...
BEGIN
EXECUTE IMMEDIATE q'[COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED IS 'Y=Archival requires WORKFLOW_SUCCESSFUL=Y (standard DBT flow), N=Archive regardless of workflow completion status (bypass for manual/non-DBT sources). Added MARS-1409']';
DBMS_OUTPUT.PUT_LINE('Comment on IS_WORKFLOW_SUCCESS_REQUIRED added.');
END;
/
-- ----------------------------------------------------------------------------
-- RECREATE A_TABLE_STAT and A_TABLE_STAT_HIST from new_version
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Creating A_TABLE_STAT (new_version)...
@@new_version/A_TABLE_STAT.sql
PROMPT
PROMPT Creating A_TABLE_STAT_HIST (new_version)...
@@new_version/A_TABLE_STAT_HIST.sql
PROMPT
PROMPT Step 10 completed: A_TABLE_STAT and A_TABLE_STAT_HIST recreated from new_version scripts,
PROMPT IS_WORKFLOW_SUCCESS_REQUIRED column added to A_SOURCE_FILE_CONFIG (MARS-1409).
PROMPT

View File

@@ -93,6 +93,31 @@ WHERE owner = 'CT_MRDS'
AND name = 'FILE_ARCHIVER'
ORDER BY type, line, position;
-- Check DATA_EXPORTER compilation status
PROMPT
PROMPT 5C. Checking DATA_EXPORTER package compilation...
SELECT
object_name,
object_type,
status,
last_ddl_time
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name = 'DATA_EXPORTER'
AND object_type IN ('PACKAGE', 'PACKAGE BODY')
ORDER BY object_type;
SELECT
name,
type,
line,
position,
text
FROM all_errors
WHERE owner = 'CT_MRDS'
AND name = 'DATA_EXPORTER'
ORDER BY type, line, position;
-- Check trigger status
PROMPT
PROMPT 5B. Checking A_WORKFLOW_HISTORY trigger...
@@ -112,7 +137,9 @@ SELECT 'FILE_MANAGER' AS PACKAGE_NAME, CT_MRDS.FILE_MANAGER.GET_VERSION() AS V
UNION ALL
SELECT 'ENV_MANAGER' AS PACKAGE_NAME, CT_MRDS.ENV_MANAGER.GET_VERSION() AS VERSION FROM DUAL
UNION ALL
SELECT 'FILE_ARCHIVER' AS PACKAGE_NAME, CT_MRDS.FILE_ARCHIVER.GET_VERSION() AS VERSION FROM DUAL;
SELECT 'FILE_ARCHIVER' AS PACKAGE_NAME, CT_MRDS.FILE_ARCHIVER.GET_VERSION() AS VERSION FROM DUAL
UNION ALL
SELECT 'DATA_EXPORTER' AS PACKAGE_NAME, CT_MRDS.DATA_EXPORTER.GET_VERSION() AS VERSION FROM DUAL;
PROMPT
PROMPT ============================================================================

View File

@@ -0,0 +1,18 @@
-- ============================================================================
-- MARS-1409 Step 11: Install DATA_EXPORTER Package Specification
-- ============================================================================
-- Script: 11_MARS_1409_install_CT_MRDS_DATA_EXPORTER_SPEC.sql
-- Description: Install DATA_EXPORTER package specification (new version)
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- ============================================================================
PROMPT ============================================================================
PROMPT Installing DATA_EXPORTER package specification...
PROMPT ============================================================================
@@new_version/DATA_EXPORTER.pkg
PROMPT DATA_EXPORTER specification installed
/

View File

@@ -0,0 +1,18 @@
-- ============================================================================
-- MARS-1409 Step 12: Install DATA_EXPORTER Package Body
-- ============================================================================
-- Script: 12_MARS_1409_install_CT_MRDS_DATA_EXPORTER_BODY.sql
-- Description: Install DATA_EXPORTER package body (new version)
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- ============================================================================
PROMPT ============================================================================
PROMPT Installing DATA_EXPORTER package body...
PROMPT ============================================================================
@@new_version/DATA_EXPORTER.pkb
PROMPT DATA_EXPORTER body installed
/

View File

@@ -0,0 +1,18 @@
-- ============================================================================
-- MARS-1409 Rollback Script
-- ============================================================================
-- Script: 83_MARS_1409_rollback_CT_MRDS_DATA_EXPORTER_SPEC.sql
-- Description: Restore DATA_EXPORTER package specification (previous version)
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- ============================================================================
PROMPT ============================================================================
PROMPT Restoring DATA_EXPORTER package specification...
PROMPT ============================================================================
@@rollback_version/DATA_EXPORTER.pkg
PROMPT DATA_EXPORTER specification restored
/

View File

@@ -0,0 +1,18 @@
-- ============================================================================
-- MARS-1409 Rollback Script
-- ============================================================================
-- Script: 84_MARS_1409_rollback_CT_MRDS_DATA_EXPORTER_BODY.sql
-- Description: Restore DATA_EXPORTER package body (previous version)
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- ============================================================================
PROMPT ============================================================================
PROMPT Restoring DATA_EXPORTER package body...
PROMPT ============================================================================
@@rollback_version/DATA_EXPORTER.pkb
PROMPT DATA_EXPORTER body restored
/

View File

@@ -60,7 +60,7 @@ SELECT
last_ddl_time
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER')
AND object_name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER', 'DATA_EXPORTER')
AND object_type IN ('PACKAGE', 'PACKAGE BODY')
ORDER BY object_name, object_type;
@@ -70,7 +70,7 @@ PROMPT 3. Checking for compilation errors...
SELECT name, type, line, position, text
FROM all_errors
WHERE owner = 'CT_MRDS'
AND name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER')
AND name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER', 'DATA_EXPORTER')
ORDER BY name, type, line, position;
-- Verify package versions
@@ -80,7 +80,9 @@ SELECT 'FILE_MANAGER' AS PACKAGE_NAME, CT_MRDS.FILE_MANAGER.GET_VERSION() AS V
UNION ALL
SELECT 'ENV_MANAGER' AS PACKAGE_NAME, CT_MRDS.ENV_MANAGER.GET_VERSION() AS VERSION FROM DUAL
UNION ALL
SELECT 'FILE_ARCHIVER' AS PACKAGE_NAME, CT_MRDS.FILE_ARCHIVER.GET_VERSION() AS VERSION FROM DUAL;
SELECT 'FILE_ARCHIVER' AS PACKAGE_NAME, CT_MRDS.FILE_ARCHIVER.GET_VERSION() AS VERSION FROM DUAL
UNION ALL
SELECT 'DATA_EXPORTER' AS PACKAGE_NAME, CT_MRDS.DATA_EXPORTER.GET_VERSION() AS VERSION FROM DUAL;
PROMPT
PROMPT ============================================================================

View File

@@ -0,0 +1,92 @@
-- ============================================================================
-- MARS-1409 Rollback Step 100: Restore A_TABLE_STAT, A_TABLE_STAT_HIST,
-- remove IS_WORKFLOW_SUCCESS_REQUIRED from A_SOURCE_FILE_CONFIG
-- ============================================================================
-- Purpose: Rollback of step 10:
-- - A_TABLE_STAT and A_TABLE_STAT_HIST: DROP and recreate from rollback_version
-- - A_SOURCE_FILE_CONFIG: ALTER TABLE DROP COLUMN IS_WORKFLOW_SUCCESS_REQUIRED
-- (preserves existing configuration data)
-- - A_SOURCE_FILE_RECEIVED: no changes in this step
-- Prerequisites: Step 10 was applied
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
WHENEVER SQLERROR EXIT SQL.SQLCODE
-- ----------------------------------------------------------------------------
-- DROP A_TABLE_STAT
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Dropping A_TABLE_STAT...
BEGIN
EXECUTE IMMEDIATE 'DROP TABLE CT_MRDS.A_TABLE_STAT';
DBMS_OUTPUT.PUT_LINE('Table A_TABLE_STAT dropped.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -942 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Table A_TABLE_STAT does not exist.');
ELSE
RAISE;
END IF;
END;
/
-- ----------------------------------------------------------------------------
-- DROP A_TABLE_STAT_HIST
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Dropping A_TABLE_STAT_HIST...
BEGIN
EXECUTE IMMEDIATE 'DROP TABLE CT_MRDS.A_TABLE_STAT_HIST';
DBMS_OUTPUT.PUT_LINE('Table A_TABLE_STAT_HIST dropped.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -942 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Table A_TABLE_STAT_HIST does not exist.');
ELSE
RAISE;
END IF;
END;
/
-- ----------------------------------------------------------------------------
-- DROP IS_WORKFLOW_SUCCESS_REQUIRED from A_SOURCE_FILE_CONFIG
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Dropping IS_WORKFLOW_SUCCESS_REQUIRED from A_SOURCE_FILE_CONFIG...
BEGIN
EXECUTE IMMEDIATE 'ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG DROP COLUMN IS_WORKFLOW_SUCCESS_REQUIRED';
DBMS_OUTPUT.PUT_LINE('Column IS_WORKFLOW_SUCCESS_REQUIRED dropped from A_SOURCE_FILE_CONFIG (CHK_IS_WORKFLOW_SUCCESS_REQUIRED constraint dropped automatically).');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -904 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column IS_WORKFLOW_SUCCESS_REQUIRED does not exist in A_SOURCE_FILE_CONFIG.');
ELSE
RAISE;
END IF;
END;
/
-- ----------------------------------------------------------------------------
-- RECREATE A_TABLE_STAT and A_TABLE_STAT_HIST from rollback_version
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Creating A_TABLE_STAT (rollback_version - pre-MARS-1409 structure)...
@@rollback_version/A_TABLE_STAT.sql
PROMPT
PROMPT Creating A_TABLE_STAT_HIST (rollback_version - pre-MARS-1409 structure)...
@@rollback_version/A_TABLE_STAT_HIST.sql
PROMPT
PROMPT Rollback Step 100 completed: A_TABLE_STAT and A_TABLE_STAT_HIST restored to pre-MARS-1409
PROMPT structure, IS_WORKFLOW_SUCCESS_REQUIRED column removed from A_SOURCE_FILE_CONFIG.
PROMPT

View File

@@ -1,44 +1,46 @@
-- ============================================================================
-- MARS-1409 Rollback 91: Drop A_WORKFLOW_HISTORY_KEY column
-- MARS-1409 Rollback 99: Remove A_WORKFLOW_HISTORY_KEY column from A_SOURCE_FILE_RECEIVED
-- ============================================================================
-- Purpose: Remove A_WORKFLOW_HISTORY_KEY column from A_SOURCE_FILE_RECEIVED
-- Purpose: Drop A_WORKFLOW_HISTORY_KEY column from A_SOURCE_FILE_RECEIVED
-- using ALTER TABLE to preserve existing data.
-- Prerequisites: A_SOURCE_FILE_CONFIG table exists (FK dependency)
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
WHENEVER SQLERROR EXIT SQL.SQLCODE
PROMPT
PROMPT Dropping A_WORKFLOW_HISTORY_KEY column from A_SOURCE_FILE_RECEIVED...
PROMPT Dropping A_WORKFLOW_HISTORY_KEY from A_SOURCE_FILE_RECEIVED...
DECLARE
vColumnExists NUMBER := 0;
BEGIN
SELECT COUNT(*) INTO vColumnExists
FROM ALL_TAB_COLUMNS
WHERE OWNER = 'CT_MRDS'
AND TABLE_NAME = 'A_SOURCE_FILE_RECEIVED'
AND COLUMN_NAME = 'A_WORKFLOW_HISTORY_KEY';
IF vColumnExists = 0 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column A_WORKFLOW_HISTORY_KEY does not exist (already dropped).');
RETURN;
END IF;
EXECUTE IMMEDIATE 'ALTER TABLE CT_MRDS.A_SOURCE_FILE_RECEIVED DROP COLUMN A_WORKFLOW_HISTORY_KEY';
DBMS_OUTPUT.PUT_LINE('A_WORKFLOW_HISTORY_KEY column dropped successfully!');
-- Recompile packages invalidated by column drop
EXECUTE IMMEDIATE 'ALTER PACKAGE CT_MRDS.ENV_MANAGER COMPILE BODY';
EXECUTE IMMEDIATE 'ALTER PACKAGE CT_MRDS.FILE_MANAGER COMPILE';
EXECUTE IMMEDIATE 'ALTER PACKAGE CT_MRDS.FILE_MANAGER COMPILE BODY';
EXECUTE IMMEDIATE 'ALTER PACKAGE CT_MRDS.FILE_ARCHIVER COMPILE';
EXECUTE IMMEDIATE 'ALTER PACKAGE CT_MRDS.FILE_ARCHIVER COMPILE BODY';
DBMS_OUTPUT.PUT_LINE('Dependent packages recompiled.');
DBMS_OUTPUT.PUT_LINE('Column A_WORKFLOW_HISTORY_KEY dropped from A_SOURCE_FILE_RECEIVED.');
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('ERROR dropping column: ' || SQLERRM);
RAISE;
IF SQLCODE = -904 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column A_WORKFLOW_HISTORY_KEY does not exist in A_SOURCE_FILE_RECEIVED.');
ELSE
RAISE;
END IF;
END;
/
PROMPT
PROMPT Renaming IS_KEPT_IN_TRASH back to IS_KEEP_IN_TRASH in A_SOURCE_FILE_CONFIG...
BEGIN
EXECUTE IMMEDIATE 'ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG RENAME COLUMN IS_KEPT_IN_TRASH TO IS_KEEP_IN_TRASH';
DBMS_OUTPUT.PUT_LINE('Column IS_KEPT_IN_TRASH renamed back to IS_KEEP_IN_TRASH in A_SOURCE_FILE_CONFIG.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -904 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column IS_KEPT_IN_TRASH does not exist (already renamed back or not present).');
ELSE
RAISE;
END IF;
END;
/
PROMPT
PROMPT Rollback 99 completed: A_WORKFLOW_HISTORY_KEY removed and IS_KEPT_IN_TRASH renamed back to IS_KEEP_IN_TRASH.
PROMPT

View File

@@ -31,9 +31,9 @@ PROMPT =========================================================================
PROMPT MARS-1409 Installation Starting
PROMPT ============================================================================
PROMPT Package: CT_MRDS.FILE_MANAGER v3.X.X
PROMPT Change: Add A_WORKFLOW_HISTORY_KEY to A_SOURCE_FILE_RECEIVED
PROMPT Purpose: Direct tracking of workflow history keys in file registration
PROMPT Steps: 11 (DDL, ENV_MANAGER Update, FILE_MANAGER Update, FILE_ARCHIVER Update, Trigger Update, Verification, Tracking, Version Verification)
PROMPT Change: Add A_WORKFLOW_HISTORY_KEY to A_SOURCE_FILE_RECEIVED; add ARCHIVAL_STRATEGY, ARCH_MINIMUM_AGE_MONTHS, IS_WORKFLOW_SUCCESS_REQUIRED and WORKFLOW_SUCCESS_* columns to A_TABLE_STAT/HIST
PROMPT Purpose: Direct tracking of workflow history keys in file registration; self-documenting statistics records; separate total vs workflow-success statistics
PROMPT Steps: 14 (DDL x2, ENV_MANAGER Update, FILE_MANAGER Update, FILE_ARCHIVER Update, DATA_EXPORTER Update, Trigger Update, Verification, Tracking, Version Verification)
PROMPT Timestamp:
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS install_start FROM DUAL;
PROMPT ============================================================================
@@ -56,61 +56,79 @@ PROMPT =========================================================================
PROMPT
PROMPT ============================================================================
PROMPT STEP 2: Update ENV_MANAGER package specification
PROMPT STEP 2: Add ARCHIVAL_STRATEGY, ARCH_MINIMUM_AGE_MONTHS, IS_WORKFLOW_SUCCESS_REQUIRED and WORKFLOW_SUCCESS_FILE_COUNT/ROW_COUNT/SIZE columns to A_TABLE_STAT and A_TABLE_STAT_HIST
PROMPT ============================================================================
@@02_MARS_1409_install_CT_MRDS_ENV_MANAGER_SPEC.sql
@@02_MARS_1409_add_archival_strategy_column.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 3: Update ENV_MANAGER package body
PROMPT STEP 3: Update ENV_MANAGER package specification
PROMPT ============================================================================
@@03_MARS_1409_install_CT_MRDS_ENV_MANAGER_BODY.sql
@@03_MARS_1409_install_CT_MRDS_ENV_MANAGER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 4: Update FILE_MANAGER package specification
PROMPT STEP 4: Update ENV_MANAGER package body
PROMPT ============================================================================
@@04_MARS_1409_install_CT_MRDS_FILE_MANAGER_SPEC.sql
@@04_MARS_1409_install_CT_MRDS_ENV_MANAGER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 5: Update FILE_MANAGER package body
PROMPT STEP 5: Update FILE_MANAGER package specification
PROMPT ============================================================================
@@05_MARS_1409_install_CT_MRDS_FILE_MANAGER_BODY.sql
@@05_MARS_1409_install_CT_MRDS_FILE_MANAGER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 6: Update FILE_ARCHIVER package specification
PROMPT STEP 6: Update FILE_MANAGER package body
PROMPT ============================================================================
@@06_MARS_1409_install_CT_MRDS_FILE_ARCHIVER_SPEC.sql
@@06_MARS_1409_install_CT_MRDS_FILE_MANAGER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 7: Update FILE_ARCHIVER package body
PROMPT STEP 7: Update FILE_ARCHIVER package specification
PROMPT ============================================================================
@@07_MARS_1409_install_CT_MRDS_FILE_ARCHIVER_BODY.sql
@@07_MARS_1409_install_CT_MRDS_FILE_ARCHIVER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 8: Update A_WORKFLOW_HISTORY trigger
PROMPT STEP 8: Update FILE_ARCHIVER package body
PROMPT ============================================================================
@@08_MARS_1409_install_CT_MRDS_A_WORKFLOW_HISTORY.sql
@@08_MARS_1409_install_CT_MRDS_FILE_ARCHIVER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 9: Verify installation
PROMPT STEP 9: Install DATA_EXPORTER package specification
PROMPT ============================================================================
@@09_MARS_1409_verify_installation.sql
@@11_MARS_1409_install_CT_MRDS_DATA_EXPORTER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 10: Track package versions
PROMPT STEP 10: Install DATA_EXPORTER package body
PROMPT ============================================================================
@@12_MARS_1409_install_CT_MRDS_DATA_EXPORTER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 11: Update A_WORKFLOW_HISTORY trigger
PROMPT ============================================================================
@@09_MARS_1409_install_CT_MRDS_A_WORKFLOW_HISTORY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 12: Verify installation
PROMPT ============================================================================
@@10_MARS_1409_verify_installation.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 13: Track package versions
PROMPT ============================================================================
@@track_package_versions.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 11: Verify package versions
PROMPT STEP 14: Verify package versions
PROMPT ============================================================================
@@verify_packages_version.sql
@@ -125,3 +143,5 @@ PROMPT Review the log file for detailed results: &_filename
PROMPT ============================================================================
spool off
quit;

View File

@@ -0,0 +1,106 @@
-- ====================================================================
-- A_SOURCE_FILE_CONFIG Table
-- ====================================================================
-- Purpose: Store source file configuration and processing rules
-- MARS-1049: Added ENCODING column for CSV character set support
-- MARS-828: Added ARCHIVAL_STRATEGY and MINIMUM_AGE_MONTHS for archival automation
-- MARS-1409: Added IS_WORKFLOW_SUCCESS_REQUIRED flag for workflow bypass
-- ====================================================================
CREATE TABLE CT_MRDS.A_SOURCE_FILE_CONFIG (
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL ENABLE,
A_SOURCE_KEY VARCHAR2(30) NOT NULL ENABLE,
SOURCE_FILE_TYPE VARCHAR2(200), -- Can be 'INPUT' or 'CONTAINER' or 'LOAD_CONFIG'
SOURCE_FILE_ID VARCHAR2(200),
SOURCE_FILE_DESC VARCHAR2(2000),
SOURCE_FILE_NAME_PATTERN VARCHAR2(200),
TABLE_ID VARCHAR2(200),
TEMPLATE_TABLE_NAME VARCHAR2(200),
CONTAINER_FILE_KEY NUMBER(38,0),
ARCHIVE_THRESHOLD_DAYS NUMBER(4,0),
ARCHIVE_THRESHOLD_FILES_COUNT NUMBER(38,0),
ARCHIVE_THRESHOLD_BYTES_SUM NUMBER(38,0),
ODS_SCHEMA_NAME VARCHAR2(100),
ARCHIVE_THRESHOLD_ROWS_COUNT NUMBER(38,0),
HOURS_TO_EXPIRE_STATISTICS NUMBER(38,3),
ARCHIVAL_STRATEGY VARCHAR2(50),
MINIMUM_AGE_MONTHS NUMBER(3,0),
ENCODING VARCHAR2(50) DEFAULT 'UTF8',
IS_ARCHIVE_ENABLED CHAR(1) DEFAULT 'N' NOT NULL,
IS_KEPT_IN_TRASH CHAR(1) DEFAULT 'N' NOT NULL,
IS_WORKFLOW_SUCCESS_REQUIRED CHAR(1) DEFAULT 'Y' NOT NULL,
CONSTRAINT A_SOURCE_FILE_CONFIG_PK PRIMARY KEY (A_SOURCE_FILE_CONFIG_KEY),
CONSTRAINT CHK_IS_ARCHIVE_ENABLED CHECK (IS_ARCHIVE_ENABLED IN ('Y', 'N')),
CONSTRAINT CHK_IS_KEPT_IN_TRASH CHECK (IS_KEPT_IN_TRASH IN ('Y', 'N')),
CONSTRAINT CHK_IS_WORKFLOW_SUCCESS_REQUIRED CHECK (IS_WORKFLOW_SUCCESS_REQUIRED IN ('Y', 'N')),
CONSTRAINT SOURCE_FILE_TYPE_CHK CHECK (SOURCE_FILE_TYPE IN ('INPUT', 'CONTAINER', 'LOAD_CONFIG')),
CONSTRAINT ASFC_A_SOURCE_KEY_FK FOREIGN KEY(A_SOURCE_KEY) REFERENCES CT_MRDS.A_SOURCE(A_SOURCE_KEY),
CONSTRAINT ASFC_CONTAINER_FILE_KEY_FK FOREIGN KEY(CONTAINER_FILE_KEY) REFERENCES CT_MRDS.A_SOURCE_FILE_CONFIG(A_SOURCE_FILE_CONFIG_KEY),
CONSTRAINT A_SOURCE_FILE_CONFIG_UQ1 UNIQUE(SOURCE_FILE_TYPE, SOURCE_FILE_ID, TABLE_ID)
) TABLESPACE "DATA";
-- Column comments
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY IS
'Primary key - unique identifier for source file configuration record';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_KEY IS
'Foreign key to A_SOURCE table - identifies the source system (e.g., LM, C2D, CSDB)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE IS
'Type of file configuration: INPUT (data files), CONTAINER (XML files), or LOAD_CONFIG (configuration files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID IS
'Unique identifier for the source file within the source system (e.g., UC_DISSEM, STANDING_FACILITIES)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_DESC IS
'Human-readable description of the source file and its purpose';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_NAME_PATTERN IS
'Filename pattern for matching incoming files (supports wildcards, e.g., UC_NMA_DISSEM-*.csv)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID IS
'Identifier for the target table where data will be loaded (without schema prefix)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME IS
'Fully qualified name of template table in CT_ET_TEMPLATES schema used for external table creation';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY IS
'Foreign key to parent container configuration when this file is part of an XML container (NULL for standalone files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_DAYS IS
'Threshold for THRESHOLD_BASED strategy: archive data older than N days';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_FILES_COUNT IS
'Trigger archival when file count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_BYTES_SUM IS
'Trigger archival when total size in bytes exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_ROWS_COUNT IS
'Trigger archival when total row count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ODS_SCHEMA_NAME IS
'Schema name where ODS external tables are created (typically ODS)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.HOURS_TO_EXPIRE_STATISTICS IS
'Number of hours before table statistics expire and need to be recalculated';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY IS
'Archival strategy: THRESHOLD_BASED (days-based), MINIMUM_AGE_MONTHS (0=current month, N=retain N months), HYBRID (combination)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS IS
'Minimum age in months before archival (required for MINIMUM_AGE_MONTHS and HYBRID strategies, 0=current month only)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING IS
'Oracle character set name for CSV files (e.g., UTF8, WE8MSWIN1252, EE8ISO8859P2)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED IS
'Y=Enable archiving, N=Skip archiving. Controls if table participates in archival process';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_KEPT_IN_TRASH IS
'Y=Keep files in TRASH after archiving, N=Delete immediately. Controls TRASH retention policy';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED IS
'Y=Archival requires WORKFLOW_SUCCESSFUL=Y (standard DBT flow), N=Archive regardless of workflow completion status (bypass for manual/non-DBT sources). Added MARS-1409';
GRANT SELECT, INSERT, UPDATE, DELETE ON CT_MRDS.A_SOURCE_FILE_CONFIG TO MRDS_LOADER_ROLE;
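
With the defaults above, a configuration row exercising the MARS-1409 flags could look like the following. The key and identifier values are illustrative (lifted from the examples in the column comments), and the row presumes a matching parent row in A_SOURCE:

INSERT INTO CT_MRDS.A_SOURCE_FILE_CONFIG (
  A_SOURCE_FILE_CONFIG_KEY,     -- illustrative key value
  A_SOURCE_KEY,                 -- e.g. 'LM'; must exist in A_SOURCE
  SOURCE_FILE_TYPE,
  SOURCE_FILE_ID,
  SOURCE_FILE_NAME_PATTERN,
  TABLE_ID,
  ODS_SCHEMA_NAME,
  ARCHIVAL_STRATEGY,
  MINIMUM_AGE_MONTHS,           -- retain N months before archival (0 = current month only)
  IS_ARCHIVE_ENABLED,
  IS_KEPT_IN_TRASH,
  IS_WORKFLOW_SUCCESS_REQUIRED  -- 'N' = archive regardless of workflow status (MARS-1409)
) VALUES (
  1001, 'LM', 'INPUT', 'UC_DISSEM', 'UC_NMA_DISSEM-*.csv', 'UC_DISSEM',
  'ODS', 'MINIMUM_AGE_MONTHS', 2, 'Y', 'N', 'N'
);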

View File

@@ -0,0 +1,56 @@
-- ====================================================================
-- A_TABLE_STAT Table
-- ====================================================================
-- Purpose: Store current table statistics and archival thresholds
-- ====================================================================
CREATE TABLE CT_MRDS.A_TABLE_STAT (
-- === Identity / metadata ===
A_TABLE_STAT_KEY NUMBER(38,0) PRIMARY KEY,
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL,
TABLE_NAME VARCHAR2(200) NOT NULL,
CREATED TIMESTAMP(6) DEFAULT SYSTIMESTAMP,
-- === Archival configuration snapshot (values at gather time) ===
ARCHIVAL_STRATEGY VARCHAR2(30),
ARCH_MINIMUM_AGE_MONTHS NUMBER(4,0),
ARCH_THRESHOLD_DAYS NUMBER(4,0),
IS_WORKFLOW_SUCCESS_REQUIRED CHAR(1),
-- === Total statistics (all files, no workflow filter) ===
FILE_COUNT NUMBER(38,0),
ROW_COUNT NUMBER(38,0),
TOTAL_SIZE NUMBER(38,0),
-- === Over-archival-threshold statistics ===
OVER_ARCH_THRESOLD_FILE_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_ROW_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_TOTAL_SIZE NUMBER(38,0),
-- === Workflow-success statistics (WORKFLOW_SUCCESSFUL='Y' files only) ===
WORKFLOW_SUCCESS_FILE_COUNT NUMBER(38,0),
WORKFLOW_SUCCESS_ROW_COUNT NUMBER(38,0),
WORKFLOW_SUCCESS_TOTAL_SIZE NUMBER(38,0),
CONSTRAINT A_TABLE_STAT_UK1 UNIQUE(A_SOURCE_FILE_CONFIG_KEY)
) TABLESPACE "DATA";
-- Identity / metadata
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.A_TABLE_STAT_KEY IS 'Primary key, populated from A_TABLE_STAT_KEY_SEQ sequence.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.A_SOURCE_FILE_CONFIG_KEY IS 'Foreign key to A_SOURCE_FILE_CONFIG; one current-stat row per config entry (unique constraint A_TABLE_STAT_UK1).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.TABLE_NAME IS 'Fully qualified ODS external table name (SCHEMA.TABLE) for which statistics were gathered.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.CREATED IS 'Timestamp when the statistics were gathered by FILE_ARCHIVER.GATHER_TABLE_STAT.';
-- Archival configuration snapshot
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.ARCHIVAL_STRATEGY IS 'Archival strategy active when statistics were gathered (THRESHOLD_BASED, MINIMUM_AGE_MONTHS, HYBRID). Populated by FILE_ARCHIVER.GATHER_TABLE_STAT (MARS-1409).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.ARCH_MINIMUM_AGE_MONTHS IS 'Minimum age threshold in months copied from A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS. Populated for MINIMUM_AGE_MONTHS and HYBRID strategies; NULL for THRESHOLD_BASED. Populated by FILE_ARCHIVER.GATHER_TABLE_STAT (MARS-1409).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.ARCH_THRESHOLD_DAYS IS 'Archive threshold in days copied from A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_DAYS. Used by THRESHOLD_BASED and HYBRID strategies.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.IS_WORKFLOW_SUCCESS_REQUIRED IS 'Snapshot of A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED at gather time. Y = OVER_ARCH_THRESOLD counts include only files with WORKFLOW_SUCCESSFUL=Y. Added MARS-1409.';
-- Total statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.FILE_COUNT IS 'Total number of files present in the ODS external table, regardless of workflow success status.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.ROW_COUNT IS 'Total row count across all files in the ODS external table.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.TOTAL_SIZE IS 'Total size in bytes of all files in the ODS bucket location.';
-- Over-threshold statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.OVER_ARCH_THRESOLD_FILE_COUNT IS 'Number of files that satisfy the archival threshold condition. When IS_WORKFLOW_SUCCESS_REQUIRED=Y, also requires WORKFLOW_SUCCESSFUL=Y.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.OVER_ARCH_THRESOLD_ROW_COUNT IS 'Row count for files that satisfy the archival threshold condition.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.OVER_ARCH_THRESOLD_TOTAL_SIZE IS 'Size in bytes for files that satisfy the archival threshold condition.';
-- Workflow-success statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.WORKFLOW_SUCCESS_FILE_COUNT IS 'Count of files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.WORKFLOW_SUCCESS_ROW_COUNT IS 'Row count for files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.WORKFLOW_SUCCESS_TOTAL_SIZE IS 'Size in bytes for files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
-- Note: A_TABLE_STAT_UK1 index is auto-created by the UNIQUE constraint definition above.
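-- For orientation, a query sketch against the columns defined above, listing configurations
-- whose current statistics already exceed an archival trigger (illustrative, not part of the package):
SELECT A_SOURCE_FILE_CONFIG_KEY,
       TABLE_NAME,
       OVER_ARCH_THRESOLD_FILE_COUNT,
       OVER_ARCH_THRESOLD_TOTAL_SIZE
  FROM CT_MRDS.A_TABLE_STAT
 WHERE OVER_ARCH_THRESOLD_FILE_COUNT > 0   -- configs with files past their archival threshold
 ORDER BY OVER_ARCH_THRESOLD_TOTAL_SIZE DESC;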

View File

@@ -0,0 +1,53 @@
-- ====================================================================
-- A_TABLE_STAT_HIST Table
-- ====================================================================
-- Purpose: Store historical table statistics for trend analysis
-- ====================================================================
CREATE TABLE CT_MRDS.A_TABLE_STAT_HIST (
-- === Identity / metadata ===
A_TABLE_STAT_HIST_KEY NUMBER(38,0) PRIMARY KEY,
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL,
TABLE_NAME VARCHAR2(200) NOT NULL,
CREATED TIMESTAMP(6) DEFAULT SYSTIMESTAMP,
-- === Archival configuration snapshot (values at gather time) ===
ARCHIVAL_STRATEGY VARCHAR2(30),
ARCH_MINIMUM_AGE_MONTHS NUMBER(4,0),
ARCH_THRESHOLD_DAYS NUMBER(4,0),
IS_WORKFLOW_SUCCESS_REQUIRED CHAR(1),
-- === Total statistics (all files, no workflow filter) ===
FILE_COUNT NUMBER(38,0),
ROW_COUNT NUMBER(38,0),
TOTAL_SIZE NUMBER(38,0),
-- === Over-archival-threshold statistics ===
OVER_ARCH_THRESOLD_FILE_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_ROW_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_TOTAL_SIZE NUMBER(38,0),
-- === Workflow-success statistics (WORKFLOW_SUCCESSFUL='Y' files only) ===
WORKFLOW_SUCCESS_FILE_COUNT NUMBER(38,0),
WORKFLOW_SUCCESS_ROW_COUNT NUMBER(38,0),
WORKFLOW_SUCCESS_TOTAL_SIZE NUMBER(38,0)
) TABLESPACE "DATA";
-- Identity / metadata
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.A_TABLE_STAT_HIST_KEY IS 'Primary key, populated from A_TABLE_STAT_KEY_SEQ sequence (shared with A_TABLE_STAT).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.A_SOURCE_FILE_CONFIG_KEY IS 'Foreign key to A_SOURCE_FILE_CONFIG. Multiple history rows per config entry (no unique constraint).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.TABLE_NAME IS 'Fully qualified ODS external table name (SCHEMA.TABLE) for which statistics were gathered.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.CREATED IS 'Timestamp when the statistics snapshot was taken by FILE_ARCHIVER.GATHER_TABLE_STAT.';
-- Archival configuration snapshot
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.ARCHIVAL_STRATEGY IS 'Archival strategy active when statistics were gathered (THRESHOLD_BASED, MINIMUM_AGE_MONTHS, HYBRID). Populated by FILE_ARCHIVER.GATHER_TABLE_STAT (MARS-1409).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.ARCH_MINIMUM_AGE_MONTHS IS 'Minimum age threshold in months copied from A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS. Populated for MINIMUM_AGE_MONTHS and HYBRID strategies; NULL for THRESHOLD_BASED. Populated by FILE_ARCHIVER.GATHER_TABLE_STAT (MARS-1409).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.ARCH_THRESHOLD_DAYS IS 'Archive threshold in days copied from A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_DAYS. Used by THRESHOLD_BASED and HYBRID strategies.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.IS_WORKFLOW_SUCCESS_REQUIRED IS 'Snapshot of A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED at gather time. Y = OVER_ARCH_THRESOLD counts include only files with WORKFLOW_SUCCESSFUL=Y. Added MARS-1409.';
-- Total statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.FILE_COUNT IS 'Total number of files present in the ODS external table at gather time, regardless of workflow success status.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.ROW_COUNT IS 'Total row count across all files in the ODS external table at gather time.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.TOTAL_SIZE IS 'Total size in bytes of all files in the ODS bucket location at gather time.';
-- Over-threshold statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.OVER_ARCH_THRESOLD_FILE_COUNT IS 'Number of files that satisfied the archival threshold condition. When IS_WORKFLOW_SUCCESS_REQUIRED=Y, also required WORKFLOW_SUCCESSFUL=Y.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.OVER_ARCH_THRESOLD_ROW_COUNT IS 'Row count for files that satisfied the archival threshold condition.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.OVER_ARCH_THRESOLD_TOTAL_SIZE IS 'Size in bytes for files that satisfied the archival threshold condition.';
-- Workflow-success statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.WORKFLOW_SUCCESS_FILE_COUNT IS 'Count of files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.WORKFLOW_SUCCESS_ROW_COUNT IS 'Row count for files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.WORKFLOW_SUCCESS_TOTAL_SIZE IS 'Size in bytes for files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
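-- Since history rows accumulate per config (no unique constraint), trend analysis reduces to
-- grouping snapshots by time. A sketch, assuming a placeholder config key of 123:
SELECT TRUNC(CREATED, 'MM')  AS stat_month,
       MAX(FILE_COUNT)       AS max_file_count,
       MAX(TOTAL_SIZE)       AS max_total_size_bytes
  FROM CT_MRDS.A_TABLE_STAT_HIST
 WHERE A_SOURCE_FILE_CONFIG_KEY = 123   -- placeholder key
 GROUP BY TRUNC(CREATED, 'MM')
 ORDER BY stat_month;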

File diff suppressed because it is too large

View File

@@ -0,0 +1,220 @@
create or replace PACKAGE CT_MRDS.DATA_EXPORTER
AUTHID CURRENT_USER
AS
/**
* Data Export Package: Provides comprehensive data export capabilities to various formats (CSV, Parquet)
* with support for cloud storage integration via Oracle Cloud Infrastructure (OCI).
* The structure of this comment is used by the GET_PACKAGE_DOCUMENTATION function,
* which returns documentation text for the Confluence page (ready to copy-paste).
**/
-- Package Version Information
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '2.17.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(19) := '2026-03-11 10:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(50) := 'MRDS Development Team';
-- Version History (last 3-5 changes)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'v2.17.0 (2026-03-11): PARQUET FIX - Added pFormat parameter to buildQueryWithDateFormats. REPLACE(col,CHR(34)) now applied only when pFormat=CSV. EXPORT_TABLE_DATA_BY_DATE passes PARQUET - string data was being corrupted (single " doubled to ""). Parquet is binary and needs no quote escaping.' || CHR(10) ||
'v2.16.0 (2026-03-11): RFC 4180 FIX - Added REPLACE(col,CHR(34),CHR(34)||CHR(34)) in buildQueryWithDateFormats for VARCHAR2/CHAR/CLOB. Pre-doubled values produce compliant CSV for ORACLE_LOADER OPTIONALLY ENCLOSED BY chr(34).' || CHR(10) ||
'v2.6.3 (2026-01-28): COMPILATION FIX - Resolved ORA-00904 error in EXPORT_PARTITION_PARALLEL. SQLERRM and DBMS_UTILITY.FORMAT_ERROR_BACKTRACE cannot be used directly in SQL UPDATE statements. Now properly assigned to vgMsgTmp variable before UPDATE.' || CHR(10) ||
'v2.6.2 (2026-01-28): CRITICAL FIX - Race condition when multiple exports run simultaneously. Changed DELETE to filter by age (>24h) instead of deleting all COMPLETED chunks. Prevents concurrent sessions from deleting each other chunks. Session-safe cleanup with TASK_NAME filtering. Enables true parallel execution of multiple export jobs.' || CHR(10) ||
'v2.6.1 (2026-01-28): Added DELETE_FAILED_EXPORT_FILE procedure to clean up partial/corrupted files before retry. When partition fails mid-export, partial file is deleted before retry to prevent Oracle from creating _1 suffixed duplicates. Ensures clean retry without orphaned files in OCI bucket.' || CHR(10) ||
'v2.6.0 (2026-01-28): CRITICAL FIX - Added STATUS tracking to A_PARALLEL_EXPORT_CHUNKS table to prevent data duplication on retry. System now restarts ONLY failed partitions instead of re-exporting all data. Added ERROR_MESSAGE and EXPORT_TIMESTAMP columns for better error handling and monitoring. Prevents duplicate file creation when parallel tasks fail (e.g., 22 partitions with 16 threads, 3 failures no longer duplicates 19 successful exports).' || CHR(10) ||
'v2.5.0 (2026-01-26): Added recorddelimiter parameter with CRLF (CHR(13)||CHR(10)) for CSV exports to ensure Windows-compatible line endings. Improves cross-platform compatibility when CSV files are opened in Windows applications (Notepad, Excel).' || CHR(10) ||
'v2.4.0 (2026-01-11): Added pTemplateTableName parameter for per-column date format configuration. Implements dynamic query building with TO_CHAR for each date/timestamp column using FILE_MANAGER.GET_DATE_FORMAT. Supports 3-tier hierarchy: column-specific, template DEFAULT, global fallback. Eliminates single dateformat limitation of DBMS_CLOUD.EXPORT_DATA.' || CHR(10) ||
'v2.3.0 (2025-12-20): Added parallel partition processing using DBMS_PARALLEL_EXECUTE. New pParallelDegree parameter (1-16, default 1) for EXPORT_TABLE_DATA_BY_DATE and EXPORT_TABLE_DATA_TO_CSV_BY_DATE procedures. Each year/month partition processed in separate thread for improved performance.' || CHR(10) ||
'v2.2.0 (2025-12-19): DRY refactoring - extracted shared helper functions (sanitizeFilename, VALIDATE_TABLE_AND_COLUMNS, GET_PARTITIONS, EXPORT_SINGLE_PARTITION worker procedure). Reduced code duplication by ~400 lines. Prepared architecture for v2.3.0 parallel processing.' || CHR(10) ||
'v2.1.1 (2025-12-04): Fixed JOIN column reference A_WORKFLOW_HISTORY_KEY -> A_ETL_LOAD_SET_KEY, added consistent column mapping and dynamic column list to EXPORT_TABLE_DATA procedure, enhanced DEBUG logging for all export operations' || CHR(10) ||
'v2.1.0 (2025-10-22): Added version tracking and PARTITION_YEAR/PARTITION_MONTH support' || CHR(10) ||
'v2.0.0 (2025-10-01): Separated export functionality from FILE_MANAGER package' || CHR(10);
cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);
vgMsgTmp VARCHAR2(32000);
---------------------------------------------------------------------------------------------------------------------------
-- TYPE DEFINITIONS FOR PARTITION HANDLING
---------------------------------------------------------------------------------------------------------------------------
/**
* Record type for year/month partition information
**/
TYPE partition_rec IS RECORD (
year VARCHAR2(4),
month VARCHAR2(2)
);
/**
* Table type for collection of partition records
**/
TYPE partition_tab IS TABLE OF partition_rec;
---------------------------------------------------------------------------------------------------------------------------
-- INTERNAL PARALLEL PROCESSING CALLBACK
---------------------------------------------------------------------------------------------------------------------------
/**
* @name EXPORT_PARTITION_PARALLEL
* @desc Internal callback procedure for DBMS_PARALLEL_EXECUTE.
* Processes single partition (year/month) chunk in parallel task.
* Called by DBMS_PARALLEL_EXECUTE framework for each chunk.
* This procedure is PUBLIC because DBMS_PARALLEL_EXECUTE requires it,
* but should NOT be called directly by external code.
* @param pStartId - Chunk start ID (CHUNK_ID from A_PARALLEL_EXPORT_CHUNKS table)
* @param pEndId - Chunk end ID (same as pStartId for single-row chunks)
**/
PROCEDURE EXPORT_PARTITION_PARALLEL (
pStartId IN NUMBER,
pEndId IN NUMBER
);
---------------------------------------------------------------------------------------------------------------------------
-- MAIN EXPORT PROCEDURES
---------------------------------------------------------------------------------------------------------------------------
/**
* @name EXPORT_TABLE_DATA
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data into a CSV file on OCI infrastructure.
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example
* begin
* DATA_EXPORTER.EXPORT_TABLE_DATA(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'csv_exports'
* );
* end;
**/
PROCEDURE EXPORT_TABLE_DATA (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
/**
* @name EXPORT_TABLE_DATA_BY_DATE
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data into PARQUET files on OCI infrastructure.
* Each YEAR_MONTH pair goes to a separate file (implicit partitioning).
* Allows specifying custom column list or uses T.* if pColumnList is NULL.
* Validates that all columns in pColumnList exist in the target table.
* Automatically adds 'T.' prefix to column names in pColumnList.
* Supports parallel partition processing via pParallelDegree parameter (default 1, range 1-16).
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example
* begin
* DATA_EXPORTER.EXPORT_TABLE_DATA_BY_DATE(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'parquet_exports',
* pColumnList => 'COLUMN1, COLUMN2, COLUMN3', -- Optional
* pMinDate => DATE '2024-01-01',
* pMaxDate => SYSDATE,
* pParallelDegree => 8 -- Optional, default 1, range 1-16
* );
* end;
**/
PROCEDURE EXPORT_TABLE_DATA_BY_DATE (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pColumnList IN VARCHAR2 default NULL,
pMinDate IN DATE default DATE '1900-01-01',
pMaxDate IN DATE default SYSDATE,
pParallelDegree IN NUMBER default 1,
pTemplateTableName IN VARCHAR2 default NULL,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
/**
* @name EXPORT_TABLE_DATA_TO_CSV_BY_DATE
* @desc Exports data to separate CSV files partitioned by year and month.
* Creates one CSV file for each year/month combination found in the data.
* Uses the same date-filtering mechanism (via CT_ODS.A_LOAD_HISTORY) as EXPORT_TABLE_DATA_BY_DATE,
* but exports to CSV format instead of Parquet.
* Supports parallel partition processing via pParallelDegree parameter (1-16).
* File naming pattern: {pFileName}_YYYYMM.csv or {TABLENAME}_YYYYMM.csv (if pFileName is NULL)
* @example
* begin
* -- With custom filename
* DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'exports',
* pFileName => 'my_export.csv',
* pMinDate => DATE '2024-01-01',
* pMaxDate => SYSDATE,
* pParallelDegree => 8 -- Optional, default 1, range 1-16
* );
*
* -- With auto-generated filename (based on table name only)
* DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
* pSchemaName => 'OU_TOP',
* pTableName => 'AGGREGATED_ALLOTMENT',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'ARCHIVE',
* pFolderName => 'exports',
* pMinDate => DATE '2025-09-01',
* pMaxDate => DATE '2025-09-17'
* );
* -- This will create files like: AGGREGATED_ALLOTMENT_202509.csv, etc.
* end;
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
**/
PROCEDURE EXPORT_TABLE_DATA_TO_CSV_BY_DATE (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pFileName IN VARCHAR2 DEFAULT NULL,
pColumnList IN VARCHAR2 default NULL,
pMinDate IN DATE default DATE '1900-01-01',
pMaxDate IN DATE default SYSDATE,
pParallelDegree IN NUMBER default 1,
pTemplateTableName IN VARCHAR2 default NULL,
pMaxFileSize IN NUMBER default 104857600,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
---------------------------------------------------------------------------------------------------------------------------
-- VERSION MANAGEMENT FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* Returns the current package version number
* return: Version string in format X.Y.Z (e.g., '2.1.0')
**/
FUNCTION GET_VERSION RETURN VARCHAR2;
/**
* Returns comprehensive build information including version, date, and author
* return: Formatted string with complete build details
**/
FUNCTION GET_BUILD_INFO RETURN VARCHAR2;
/**
* Returns the version history with recent changes
* return: Multi-line string with version history
**/
FUNCTION GET_VERSION_HISTORY RETURN VARCHAR2;
END;
/
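
-- For orientation, a minimal sketch of how the DBMS_PARALLEL_EXECUTE framework drives the
-- EXPORT_PARTITION_PARALLEL callback declared above. The task name and the chunk query against
-- A_PARALLEL_EXPORT_CHUNKS are illustrative assumptions, not the package's internal implementation:
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'EXPORT_TASK');  -- hypothetical task name
  -- One single-row chunk per pending partition (CHUNK_ID used as both start and end ID):
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(
    task_name => 'EXPORT_TASK',
    sql_stmt  => 'SELECT CHUNK_ID, CHUNK_ID FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE STATUS <> ''COMPLETED''',
    by_rowid  => FALSE
  );
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'EXPORT_TASK',
    sql_stmt       => 'BEGIN CT_MRDS.DATA_EXPORTER.EXPORT_PARTITION_PARALLEL(:start_id, :end_id); END;',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 8
  );
  DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'EXPORT_TASK');
END;
/
-- The bind names :start_id and :end_id are fixed by the framework and map onto pStartId/pEndId.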

View File

@@ -324,7 +324,7 @@ AS
ERR_UNKNOWN EXCEPTION;
CODE_UNKNOWN CONSTANT PLS_INTEGER := -20999;
MSG_UNKNOWN VARCHAR2(4000) := 'Unknown Error Occured';
MSG_UNKNOWN VARCHAR2(4000) := 'Unknown Error Occurred';
PRAGMA EXCEPTION_INIT( ERR_UNKNOWN
,CODE_UNKNOWN);
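
-- For context, a minimal standalone sketch of the pattern this hunk touches: pairing a named
-- exception with a custom error code so callers can catch it by name (handler body illustrative):
DECLARE
  ERR_UNKNOWN EXCEPTION;
  PRAGMA EXCEPTION_INIT(ERR_UNKNOWN, -20999);  -- same code as CODE_UNKNOWN above
BEGIN
  RAISE_APPLICATION_ERROR(-20999, 'Unknown Error Occurred');
EXCEPTION
  WHEN ERR_UNKNOWN THEN
    NULL;  -- caught by name thanks to the pragma
END;
/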

View File

@@ -58,14 +58,15 @@ AS
BEGIN
vParameters := CT_MRDS.ENV_MANAGER.FORMAT_PARAMETERS(SYS.ODCIVARCHAR2LIST('pSourceFileConfigKey => '||nvl(to_char(pSourceFileConfigKey),NULL)));
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Start','DEBUG', vParameters);
SELECT count(*) , min(SOURCE_FILE_TYPE)
-- LEFT JOIN ensures SOURCE_FILE_TYPE is retrieved from config even when no stats exist yet
SELECT count(s.A_SOURCE_FILE_CONFIG_KEY), min(c.SOURCE_FILE_TYPE)
INTO vCount, vSourceFileType
FROM CT_MRDS.A_TABLE_STAT s
JOIN CT_MRDS.A_SOURCE_FILE_CONFIG c
FROM CT_MRDS.A_SOURCE_FILE_CONFIG c
LEFT JOIN CT_MRDS.A_TABLE_STAT s
ON s.A_SOURCE_FILE_CONFIG_KEY = c.A_SOURCE_FILE_CONFIG_KEY
WHERE s.A_SOURCE_FILE_CONFIG_KEY = pSourceFileConfigKey;
WHERE c.A_SOURCE_FILE_CONFIG_KEY = pSourceFileConfigKey;
IF vCount=0 and vSourceFileType='INPUT' THEN
IF vCount = 0 AND vSourceFileType = 'INPUT' THEN
GATHER_TABLE_STAT(pSourceFileConfigKey);
END IF;
@@ -74,9 +75,13 @@ AS
INTO vTableStat
FROM CT_MRDS.A_TABLE_STAT
WHERE A_SOURCE_FILE_CONFIG_KEY = pSourceFileConfigKey;
-- EXCEPTION
-- WHEN NO_DATA_FOUND THEN
--
EXCEPTION
WHEN NO_DATA_FOUND THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(
'No statistics found in A_TABLE_STAT for config key ' || pSourceFileConfigKey
|| ' (SOURCE_FILE_TYPE=' || NVL(vSourceFileType, 'NULL') || '). Cannot proceed with archival.',
'ERROR', vParameters);
RAISE;
END;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('End','DEBUG',vParameters);
@@ -120,8 +125,8 @@ AS
END IF;
-- Get TRASH policy from configuration
vKeepInTrash := (vSourceFileConfig.IS_KEEP_IN_TRASH = 'Y');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('TRASH policy from config: IS_KEEP_IN_TRASH=' || vSourceFileConfig.IS_KEEP_IN_TRASH, 'INFO', vParameters);
vKeepInTrash := (vSourceFileConfig.IS_KEPT_IN_TRASH = 'Y');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('TRASH policy from config: IS_KEPT_IN_TRASH=' || vSourceFileConfig.IS_KEPT_IN_TRASH, 'INFO', vParameters);
vTableStat := GET_TABLE_STAT(pSourceFileConfigKey => pSourceFileConfigKey);
@@ -139,18 +144,18 @@ AS
IF vSourceFileConfig.ARCHIVAL_STRATEGY = 'MINIMUM_AGE_MONTHS' THEN
-- MINIMUM_AGE_MONTHS: Archive based on age only, ignore thresholds
vArchivalTriggeredBy := 'AGE_BASED';
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archival strategy: MINIMUM_AGE_MONTHS (threshold-independent)','INFO');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archival strategy: MINIMUM_AGE_MONTHS (threshold-independent)','INFO', vParameters);
ELSE
-- THRESHOLD_BASED and HYBRID: Check thresholds
if vTableStat.OVER_ARCH_THRESOLD_FILE_COUNT >= vSourceFileConfig.ARCHIVE_THRESHOLD_FILES_COUNT then vArchivalTriggeredBy := 'FILES_COUNT';
elsif vTableStat.OVER_ARCH_THRESOLD_ROW_COUNT >= vSourceFileConfig.ARCHIVE_THRESHOLD_ROWS_COUNT then vArchivalTriggeredBy := vArchivalTriggeredBy||', ROWS_COUNT';
elsif vTableStat.OVER_ARCH_THRESOLD_SIZE >= vSourceFileConfig.ARCHIVE_THRESHOLD_BYTES_SUM then vArchivalTriggeredBy := vArchivalTriggeredBy||', BYTES_SUM';
else CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Non of archival triggers reached','INFO');
elsif vTableStat.OVER_ARCH_THRESOLD_ROW_COUNT >= vSourceFileConfig.ARCHIVE_THRESHOLD_ROWS_COUNT then vArchivalTriggeredBy := vArchivalTriggeredBy||', ROWS_COUNT';
elsif vTableStat.OVER_ARCH_THRESOLD_TOTAL_SIZE >= vSourceFileConfig.ARCHIVE_THRESHOLD_BYTES_SUM then vArchivalTriggeredBy := vArchivalTriggeredBy||', BYTES_SUM';
else CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('None of the archival triggers reached','INFO', vParameters);
end if;
END IF;
if LENGTH(vArchivalTriggeredBy)>0 THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archival Triggered By: '||vArchivalTriggeredBy,'INFO');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archival Triggered By: '||vArchivalTriggeredBy,'INFO', vParameters);
vTableName := DBMS_ASSERT.SCHEMA_NAME(vSourceFileConfig.ODS_SCHEMA_NAME) || '.'||DBMS_ASSERT.simple_sql_name(vSourceFileConfig.TABLE_ID)||'_ODS';
-- Use strategy-based WHERE clause (MARS-828)
@@ -166,12 +171,37 @@ AS
join CT_MRDS.a_workflow_history h
on s.a_workflow_history_key = h.a_workflow_history_key
where ' || GET_ARCHIVAL_WHERE_CLAUSE(vSourceFileConfig) || '
and h.WORKFLOW_SUCCESSFUL = ''Y''
' || CASE WHEN vSourceFileConfig.IS_WORKFLOW_SUCCESS_REQUIRED = 'Y' THEN 'and h.WORKFLOW_SUCCESSFUL = ''Y''' ELSE '' END || '
group by file$name, file$path, to_char(h.workflow_start,''yyyy''), to_char(h.workflow_start,''mm'')'
;
-- Get all files that will be archived into "vfiles" collection ("regular data files")
execute immediate vQuery bulk collect into vfiles;
-- MARS-1468: Handle ORA-29913/ORA-12801 - no files in ODS bucket (empty external table location)
-- ORA-29913 may come directly or wrapped in ORA-12801 (parallel query) with KUP-05002 root cause
BEGIN
execute immediate vQuery bulk collect into vfiles;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE IN (-29913, -12801) AND DBMS_UTILITY.FORMAT_ERROR_STACK LIKE '%KUP-05002%' THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('No files found in ODS bucket (empty location, SQLCODE=' || SQLCODE || '). Nothing to archive.', 'INFO', vParameters);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO',vParameters);
RETURN;
ELSE
RAISE;
END IF;
END;
-- Check if any files match archival criteria
IF vfiles.COUNT = 0 THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(
'No files matching archival criteria found (strategy: ' || vSourceFileConfig.ARCHIVAL_STRATEGY
|| ', IS_WORKFLOW_SUCCESS_REQUIRED: ' || vSourceFileConfig.IS_WORKFLOW_SUCCESS_REQUIRED || '). Nothing to archive.',
'INFO', vParameters);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO',vParameters);
RETURN;
END IF;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Files matching archival criteria: ' || vfiles.COUNT, 'INFO', vParameters);
-- Start EXPORT "regular data files" to parquet and DROP "csv"
FOR ym_loop IN (select distinct year, month from table(vfiles) order by 1,2) LOOP
@@ -187,12 +217,12 @@ AS
on s.a_workflow_history_key = h.a_workflow_history_key
and to_char(h.workflow_start,''yyyy'') = '''||ym_loop.year||'''
and to_char(h.workflow_start,''mm'') = '''||ym_loop.month||'''
and h.WORKFLOW_SUCCESSFUL = ''Y''
'|| CASE WHEN vSourceFileConfig.IS_WORKFLOW_SUCCESS_REQUIRED = 'Y' THEN 'and h.WORKFLOW_SUCCESSFUL = ''Y''' ELSE '' END ||'
'
;
vUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('ARCHIVE')||'ARCHIVE/'||vSourceFileConfig.A_SOURCE_KEY||'/'||vSourceFileConfig.TABLE_ID||'/PARTITION_YEAR='||ym_loop.year||'/PARTITION_MONTH='||ym_loop.month||'/';
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Start Archiving for YEAR_MONTH: '||ym_loop.year||'_'||ym_loop.month ,'INFO');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Start Archiving for YEAR_MONTH: '||ym_loop.year||'_'||ym_loop.month ,'INFO', vParameters);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Parameter for DBMS_CLOUD.EXPORT_DATA => file_uri_list' ,'DEBUG',vUri);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Parameter for DBMS_CLOUD.EXPORT_DATA => query' ,'DEBUG',vQuery);
@@ -214,7 +244,7 @@ AS
END;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vOperationId of export: '||vOperationId,'DEBUG');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vOperationId of export: '||vOperationId,'DEBUG', vParameters);
-- Get USER_LOAD_OPERATIONS info
select *
@@ -267,10 +297,10 @@ AS
target_object_uri => replace(f.pathname,'ODS','TRASH')||'/'||f.filename,
target_credential_name => ENV_MANAGER.gvCredentialName
);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File moved to TRASH folder.','DEBUG', f.pathname||'/'||f.filename);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File moved to TRASH folder: '||f.pathname||'/'||f.filename,'DEBUG', vParameters);
EXCEPTION
WHEN OTHERS THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Failed to move file to TRASH folder.','ERROR', f.pathname||'/'||f.filename);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Failed to move file to TRASH folder: '||f.pathname||'/'||f.filename,'ERROR', vParameters);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(CT_MRDS.ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
rollback;
vProcessControlStatus := 'MOVE_FILE_TO_TRASH_FAILURE';
@@ -288,7 +318,7 @@ AS
FOR f in (select filename, pathname from table(vfiles) where year = ym_loop.year and month = ym_loop.month) LOOP
DBMS_CLOUD.DELETE_OBJECT(credential_name => CT_MRDS.ENV_MANAGER.gvCredentialName,
object_uri => replace(f.pathname,'ODS','TRASH')||'/'||f.filename);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File dropped from TRASH folder.','DEBUG', f.pathname||'/'||f.filename);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File dropped from TRASH folder: '||f.pathname||'/'||f.filename,'DEBUG', vParameters);
-- Update status to ARCHIVED_AND_PURGED
UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED r
@@ -297,10 +327,10 @@ AS
AND r.source_file_name = f.filename
AND r.PROCESSING_STATUS = 'ARCHIVED_AND_TRASHED';
END LOOP;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('All archived files removed from TRASH folder and marked as ARCHIVED_AND_PURGED (config: IS_KEEP_IN_TRASH=N).','INFO');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('All archived files removed from TRASH folder and marked as ARCHIVED_AND_PURGED (config: IS_KEPT_IN_TRASH=N).','INFO', vParameters);
ELSE
-- Keep files in TRASH folder (status remains ARCHIVED_AND_TRASHED)
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archived files kept in TRASH folder for retention (config: IS_KEEP_IN_TRASH=Y, status: ARCHIVED_AND_TRASHED).','INFO');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archived files kept in TRASH folder for retention (config: IS_KEPT_IN_TRASH=Y, status: ARCHIVED_AND_TRASHED).','INFO', vParameters);
END IF;
--ROLLBACK PART
@@ -321,7 +351,7 @@ AS
target_object_uri => f.pathname||'/'||f.filename,
target_credential_name => ENV_MANAGER.gvCredentialName
);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File restored from TRASH folder.','DEBUG', f.pathname||'/'||f.filename);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File restored from TRASH folder: '||f.pathname||'/'||f.filename,'DEBUG', vParameters);
UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED r
SET PROCESSING_STATUS = 'INGESTED'
@@ -332,7 +362,7 @@ AS
EXCEPTION
WHEN OTHERS THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Failed to restore file from TRASH folder.','ERROR', replace(f.pathname,'ODS','TRASH')||'/'||f.filename);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Failed to restore file from TRASH folder: '||replace(f.pathname,'ODS','TRASH')||'/'||f.filename,'ERROR', vParameters);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(CT_MRDS.ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
vProcessControlStatus := 'RESTORE_FILE_FROM_TRASH_FAILURE';
END;
@@ -368,12 +398,12 @@ AS
credential_name => CT_MRDS.ENV_MANAGER.gvCredentialName,
object_uri => vFilename || arch_file.object_name
);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archival PARQUET file dropped.','DEBUG', vFilename || arch_file.object_name);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archival PARQUET file dropped: '||vFilename || arch_file.object_name,'DEBUG', vParameters);
END LOOP;
RAISE_APPLICATION_ERROR(CT_MRDS.ENV_MANAGER.CODE_CHANGE_STAT_TO_ARCHIVED_FAILED, CT_MRDS.ENV_MANAGER.MSG_CHANGE_STAT_TO_ARCHIVED_FAILED);
ELSIF vProcessControlStatus = 'RESTORE_FILE_FROM_TRASH_FAILURE' THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Some files were not restored from TRASH. Check A_PROCESS_LOG table for details','ERROR');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Some files were not restored from TRASH. Check A_PROCESS_LOG table for details','ERROR', vParameters);
RAISE_APPLICATION_ERROR(CT_MRDS.ENV_MANAGER.CODE_RESTORE_FILE_FROM_TRASH, CT_MRDS.ENV_MANAGER.MSG_RESTORE_FILE_FROM_TRASH);
END IF;
@@ -438,30 +468,52 @@ AS
vSourceFileConfig CT_MRDS.A_SOURCE_FILE_CONFIG%ROWTYPE;
vTableName VARCHAR2(200);
vQuery VARCHAR2(32000);
vWhereClause VARCHAR2(4000);
vOdsBucketUri VARCHAR2(1000);
vWhereClause VARCHAR2(4000);
vOverThresholdWhereClause VARCHAR2(4000);
vOdsBucketUri VARCHAR2(1000);
BEGIN
vParameters := CT_MRDS.ENV_MANAGER.FORMAT_PARAMETERS(SYS.ODCIVARCHAR2LIST('pSourceFileConfigKey => '||nvl(to_char(pSourceFileConfigKey), 'NULL')));
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Start','INFO', vParameters);
vSourceFileConfig := CT_MRDS.FILE_MANAGER.GET_SOURCE_FILE_CONFIG(pSourceFileConfigKey => pSourceFileConfigKey);
vTableName := DBMS_ASSERT.SCHEMA_NAME(vSourceFileConfig.ODS_SCHEMA_NAME) || '.'||DBMS_ASSERT.simple_sql_name(vSourceFileConfig.TABLE_ID)||'_ODS';
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vTableName','DEBUG',vTableName);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vTableName = '||vTableName, 'DEBUG', vParameters);
-- Get WHERE clause based on archival strategy (MARS-828)
vWhereClause := GET_ARCHIVAL_WHERE_CLAUSE(vSourceFileConfig);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vWhereClause','DEBUG',vWhereClause);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vWhereClause = '||vWhereClause, 'DEBUG', vParameters);
-- Get ODS bucket URI before building query
vOdsBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('ODS') || 'ODS/' || vSourceFileConfig.A_SOURCE_KEY || '/' || vSourceFileConfig.TABLE_ID || '/';
-- Build WHERE clause for OVER_ARCH_THRESOLD columns:
-- Combines archival strategy time-condition with optional workflow success filter.
-- IS_WORKFLOW_SUCCESS_REQUIRED='Y': only files with WORKFLOW_SUCCESSFUL='Y' are counted as eligible.
-- IS_WORKFLOW_SUCCESS_REQUIRED='N': all files passing the time-condition are counted as eligible.
IF vSourceFileConfig.IS_WORKFLOW_SUCCESS_REQUIRED = 'Y' THEN
vOverThresholdWhereClause := vWhereClause || ' AND workflow_successful = ''Y''';
ELSE
vOverThresholdWhereClause := vWhereClause;
END IF;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vOverThresholdWhereClause = '||vOverThresholdWhereClause, 'DEBUG', vParameters);
-- Use strategy-based WHERE clause for statistics (MARS-828)
-- FILE_COUNT, ROW_COUNT, TOTAL_SIZE: all files regardless of workflow success (never zero due to workflow filter)
-- OVER_ARCH_THRESOLD_*: IS_WORKFLOW_SUCCESS_REQUIRED-aware count of eligible files
-- WORKFLOW_SUCCESS_*: informational count of files with WORKFLOW_SUCCESSFUL='Y'
-- Column order MUST match A_TABLE_STAT column definition order for positional INTO vStats to work:
-- 1:A_TABLE_STAT_KEY, 2:A_SOURCE_FILE_CONFIG_KEY, 3:TABLE_NAME, 4:CREATED, 5:ARCHIVAL_STRATEGY,
-- 6:ARCH_MINIMUM_AGE_MONTHS, 7:ARCH_THRESHOLD_DAYS, 8:IS_WORKFLOW_SUCCESS_REQUIRED,
-- 9:FILE_COUNT, 10:ROW_COUNT, 11:TOTAL_SIZE,
-- 12:OVER_ARCH_THRESOLD_FILE_COUNT, 13:OVER_ARCH_THRESOLD_ROW_COUNT, 14:OVER_ARCH_THRESOLD_TOTAL_SIZE,
-- 15:WORKFLOW_SUCCESS_FILE_COUNT, 16:WORKFLOW_SUCCESS_ROW_COUNT, 17:WORKFLOW_SUCCESS_TOTAL_SIZE
vQuery :=
'with tmp as (
select
s.*
,file$name as filename
,h.workflow_start
,h.workflow_successful
, to_char(h.workflow_start,''yyyy'') as year
, to_char(h.workflow_start,''mm'') as month
from '||vTableName||' s
@@ -470,22 +522,31 @@ AS
)
, tmp_gr as (
select
filename, count(*) as row_count_per_file, min(workflow_start) as workflow_start
filename
,count(*) as row_count_per_file
,min(workflow_start) as workflow_start
,max(workflow_successful) as workflow_successful
from tmp
group by filename
)
select
NULL as A_TABLE_STAT_KEY
,'||pSourceFileConfigKey||' as A_SOURCE_FILE_CONFIG_KEY
,'''||vTableName||''' as TABLE_NAME
,count(*) as FILE_COUNT
,sum(case when ' || vWhereClause || ' then 1 else 0 end) as OLD_FILE_COUNT
,sum (row_count_per_file) as ROW_COUNT
,sum(case when ' || vWhereClause || ' then row_count_per_file else 0 end) as OLD_ROW_COUNT
,sum(r.bytes) as BYTES
,sum(case when ' || vWhereClause || ' then r.bytes else 0 end) as OLD_BYTES
,'||COALESCE(TO_CHAR(vSourceFileConfig.ARCHIVE_THRESHOLD_DAYS), 'NULL')||' as ARCHIVE_THRESHOLD_DAYS
,systimestamp as CREATED
NULL as A_TABLE_STAT_KEY
,'||pSourceFileConfigKey||' as A_SOURCE_FILE_CONFIG_KEY
,'''||vTableName||''' as TABLE_NAME
,systimestamp as CREATED
,'''||vSourceFileConfig.ARCHIVAL_STRATEGY||''' as ARCHIVAL_STRATEGY
,'||COALESCE(TO_CHAR(vSourceFileConfig.MINIMUM_AGE_MONTHS), 'NULL')||' as ARCH_MINIMUM_AGE_MONTHS
,'||COALESCE(TO_CHAR(vSourceFileConfig.ARCHIVE_THRESHOLD_DAYS), 'NULL')||' as ARCH_THRESHOLD_DAYS
,'''||vSourceFileConfig.IS_WORKFLOW_SUCCESS_REQUIRED||''' as IS_WORKFLOW_SUCCESS_REQUIRED
,count(*) as FILE_COUNT
,nvl(sum(row_count_per_file), 0) as ROW_COUNT
,nvl(sum(r.bytes), 0) as TOTAL_SIZE
,nvl(sum(case when ' || vOverThresholdWhereClause || ' then 1 else 0 end), 0) as OVER_ARCH_THRESOLD_FILE_COUNT
,nvl(sum(case when ' || vOverThresholdWhereClause || ' then row_count_per_file else 0 end), 0) as OVER_ARCH_THRESOLD_ROW_COUNT
,nvl(sum(case when ' || vOverThresholdWhereClause || ' then r.bytes else 0 end), 0) as OVER_ARCH_THRESOLD_TOTAL_SIZE
,nvl(sum(case when workflow_successful = ''Y'' then 1 else 0 end), 0) as WORKFLOW_SUCCESS_FILE_COUNT
,nvl(sum(case when workflow_successful = ''Y'' then row_count_per_file else 0 end), 0) as WORKFLOW_SUCCESS_ROW_COUNT
,nvl(sum(case when workflow_successful = ''Y'' then r.bytes else 0 end), 0) as WORKFLOW_SUCCESS_TOTAL_SIZE
from tmp_gr t
join (SELECT * from DBMS_CLOUD.LIST_OBJECTS(
credential_name => '''||CT_MRDS.ENV_MANAGER.gvCredentialName||''',
@@ -494,8 +555,35 @@ AS
) r
on t.filename = r.object_name'
;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vQuery','DEBUG',vQuery);
execute immediate vQuery into vStats;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('vQuery:', 'DEBUG', vQuery);
-- MARS-1468: Handle ORA-29913/ORA-12801 - no files in ODS bucket (empty external table location)
-- ORA-29913 may come directly or wrapped in ORA-12801 (parallel query) with KUP-05002 root cause
BEGIN
execute immediate vQuery into vStats;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE IN (-29913, -12801) AND DBMS_UTILITY.FORMAT_ERROR_STACK LIKE '%KUP-05002%' THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('No files found in ODS bucket (empty location, SQLCODE=' || SQLCODE || '). Saving zero statistics.', 'INFO', vParameters);
vStats.A_SOURCE_FILE_CONFIG_KEY := pSourceFileConfigKey;
vStats.TABLE_NAME := vTableName;
vStats.FILE_COUNT := 0;
vStats.OVER_ARCH_THRESOLD_FILE_COUNT := 0;
vStats.ROW_COUNT := 0;
vStats.OVER_ARCH_THRESOLD_ROW_COUNT := 0;
vStats.TOTAL_SIZE := 0;
vStats.OVER_ARCH_THRESOLD_TOTAL_SIZE := 0;
vStats.ARCH_THRESHOLD_DAYS := vSourceFileConfig.ARCHIVE_THRESHOLD_DAYS;
vStats.CREATED := SYSTIMESTAMP;
vStats.ARCHIVAL_STRATEGY := vSourceFileConfig.ARCHIVAL_STRATEGY;
vStats.ARCH_MINIMUM_AGE_MONTHS := vSourceFileConfig.MINIMUM_AGE_MONTHS;
vStats.IS_WORKFLOW_SUCCESS_REQUIRED := vSourceFileConfig.IS_WORKFLOW_SUCCESS_REQUIRED;
vStats.WORKFLOW_SUCCESS_FILE_COUNT := 0;
vStats.WORKFLOW_SUCCESS_ROW_COUNT := 0;
vStats.WORKFLOW_SUCCESS_TOTAL_SIZE := 0;
ELSE
RAISE;
END IF;
END;
vStats.A_TABLE_STAT_KEY := CT_MRDS.A_TABLE_STAT_KEY_SEQ.NEXTVAL;
insert into CT_MRDS.A_TABLE_STAT_HIST values vStats;
@@ -611,10 +699,10 @@ AS
target_credential_name => CT_MRDS.ENV_MANAGER.gvCredentialName
);
vFilesRestored := vFilesRestored + 1;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File restored from TRASH','DEBUG', file_rec.SOURCE_FILE_NAME);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File restored from TRASH: '||file_rec.SOURCE_FILE_NAME,'DEBUG', vParameters);
EXCEPTION
WHEN OTHERS THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Failed to restore file from TRASH','ERROR', file_rec.SOURCE_FILE_NAME);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Failed to restore file from TRASH: '||file_rec.SOURCE_FILE_NAME,'ERROR', vParameters);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(CT_MRDS.ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
END;
END LOOP;
@@ -651,10 +739,10 @@ AS
target_credential_name => CT_MRDS.ENV_MANAGER.gvCredentialName
);
vFilesRestored := vFilesRestored + 1;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File restored from TRASH','DEBUG', file_rec.SOURCE_FILE_NAME);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('File restored from TRASH: '||file_rec.SOURCE_FILE_NAME,'DEBUG', vParameters);
EXCEPTION
WHEN OTHERS THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Failed to restore file from TRASH','ERROR', file_rec.SOURCE_FILE_NAME);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Failed to restore file from TRASH: '||file_rec.SOURCE_FILE_NAME,'ERROR', vParameters);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(CT_MRDS.ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
END;
END LOOP;
@@ -1043,7 +1131,7 @@ AS
A_SOURCE_FILE_CONFIG_KEY,
TABLE_ID,
IS_ARCHIVE_ENABLED,
IS_KEEP_IN_TRASH,
IS_KEPT_IN_TRASH,
A_SOURCE_KEY
FROM CT_MRDS.A_SOURCE_FILE_CONFIG
WHERE SOURCE_FILE_TYPE = 'INPUT'
@@ -1068,7 +1156,7 @@ AS
ELSE
BEGIN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(
'Archiving table ' || config_rec.TABLE_ID || ' [Source: ' || config_rec.A_SOURCE_KEY || ', Config: ' || config_rec.A_SOURCE_FILE_CONFIG_KEY || ', IS_KEEP_IN_TRASH=' || config_rec.IS_KEEP_IN_TRASH || ']',
'Archiving table ' || config_rec.TABLE_ID || ' [Source: ' || config_rec.A_SOURCE_KEY || ', Config: ' || config_rec.A_SOURCE_FILE_CONFIG_KEY || ', IS_KEPT_IN_TRASH=' || config_rec.IS_KEPT_IN_TRASH || ']',
'INFO'
);
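
-- Taken together, the archival flow above can be exercised per configuration;
-- a sketch (the key 123 is a placeholder):
BEGIN
  CT_MRDS.FILE_ARCHIVER.GATHER_TABLE_STAT(pSourceFileConfigKey => 123);   -- refresh A_TABLE_STAT / A_TABLE_STAT_HIST
  CT_MRDS.FILE_ARCHIVER.ARCHIVE_TABLE_DATA(pSourceFileConfigKey => 123);  -- archive if a trigger condition is met
END;
/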

View File

@@ -17,13 +17,17 @@ AS
**/
-- Package Version Information (Semantic Versioning: MAJOR.MINOR.PATCH)
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.3.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-02-11 12:00:00';
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.4.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-03-17 11:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(100) := 'Grzegorz Michalski';
-- Version History (Latest changes first)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'3.3.0 (2026-02-11): Added IS_ARCHIVE_ENABLED and IS_KEEP_IN_TRASH columns to A_SOURCE_FILE_CONFIG for selective archiving and config-based TRASH policy. Removed pKeepInTrash parameter (now from config). Added ARCHIVE_ALL batch procedure with 3-level granularity (config/source/all). Added GATHER_TABLE_STAT_ALL batch statistics procedure with 3-level granularity. Added RESTORE_FILE_FROM_TRASH and PURGE_TRASH_FOLDER with 3-level granularity' || CHR(13)||CHR(10) ||
'3.4.0 (2026-03-17): MARS-1409 - Added IS_WORKFLOW_SUCCESS_REQUIRED flag to A_SOURCE_FILE_CONFIG (DEFAULT Y). ' ||
'Y=standard DBT flow (WORKFLOW_SUCCESSFUL=Y required), N=bypass for manual/non-DBT sources. ' ||
'Flag value stored in A_TABLE_STAT and A_TABLE_STAT_HIST for full audit of statistics basis.' || CHR(13)||CHR(10) ||
'3.3.1 (2026-03-13): Fixed ORA-29913 handling in ARCHIVE_TABLE_DATA (graceful RETURN when ODS bucket is empty) and GATHER_TABLE_STAT (saves zero statistics instead of raising error)' || CHR(13)||CHR(10) ||
'3.3.0 (2026-02-11): Added IS_ARCHIVE_ENABLED and IS_KEPT_IN_TRASH columns to A_SOURCE_FILE_CONFIG for selective archiving and config-based TRASH policy. Removed pKeepInTrash parameter (now from config). Added ARCHIVE_ALL batch procedure with 3-level granularity (config/source/all). Added GATHER_TABLE_STAT_ALL batch statistics procedure with 3-level granularity. Added RESTORE_FILE_FROM_TRASH and PURGE_TRASH_FOLDER with 3-level granularity' || CHR(13)||CHR(10) ||
'3.2.1 (2026-02-10): Fixed status update - ARCHIVED → ARCHIVED_AND_TRASHED when moving files to TRASH folder (critical bug fix)' || CHR(13)||CHR(10) ||
'3.2.0 (2026-02-06): Added pKeepInTrash parameter (DEFAULT TRUE) to ARCHIVE_TABLE_DATA for TRASH folder retention control - files kept in TRASH subfolder (DATA bucket) by default for safety and compliance' || CHR(13)||CHR(10) ||
'3.1.2 (2026-02-06): Fixed missing PARTITION_YEAR/PARTITION_MONTH assignments in UPDATE statement and export query circular dependency (now filters by workflow_start instead of partition fields)' || CHR(13)||CHR(10) ||
@@ -51,7 +55,7 @@ AS
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data from the table specified by pSourceFileConfigKey (A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY) into a PARQUET file on OCI infrastructure.
* Each YEAR_MONTH pair goes to a separate file (implicit partitioning).
* TRASH policy is controlled by A_SOURCE_FILE_CONFIG.IS_KEEP_IN_TRASH column ('Y'=keep in TRASH, 'N'=delete immediately).
* TRASH policy is controlled by A_SOURCE_FILE_CONFIG.IS_KEPT_IN_TRASH column ('Y'=keep in TRASH, 'N'=delete immediately).
**/
PROCEDURE ARCHIVE_TABLE_DATA (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE
@@ -62,7 +66,7 @@ AS
* @desc Function wrapper for ARCHIVE_TABLE_DATA procedure.
* Returns SQLCODE for Python library integration.
* Calls the main ARCHIVE_TABLE_DATA procedure and captures execution result.
* TRASH policy is controlled by A_SOURCE_FILE_CONFIG.IS_KEEP_IN_TRASH column ('Y'=keep in TRASH, 'N'=delete immediately).
* TRASH policy is controlled by A_SOURCE_FILE_CONFIG.IS_KEPT_IN_TRASH column ('Y'=keep in TRASH, 'N'=delete immediately).
* @example SELECT FILE_ARCHIVER.FN_ARCHIVE_TABLE_DATA(pSourceFileConfigKey => 123) FROM DUAL;
* @ex_rslt 0 (success) or error code
**/
@@ -137,7 +141,7 @@ AS
* @name ARCHIVE_ALL
* @desc Multi-level batch archival procedure with three granularity levels.
* Only processes configurations where IS_ARCHIVE_ENABLED='Y'.
* TRASH policy for each table is controlled by individual IS_KEEP_IN_TRASH column.
* TRASH policy for each table is controlled by individual IS_KEPT_IN_TRASH column.
* @param pSourceFileConfigKey - (LEVEL 1) Archive specific configuration key (highest priority)
* @param pSourceKey - (LEVEL 2) Archive all enabled tables for source system (e.g., 'LM', 'C2D') (medium priority)
* @param pArchiveAll - (LEVEL 3) When TRUE, archive ALL enabled tables across all sources (lowest priority)
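
-- A sketch of the three granularity levels (parameter names per the @param list above; pArchiveAll
-- is assumed BOOLEAN per "When TRUE"; only one level would normally be supplied per call):
BEGIN
  CT_MRDS.FILE_ARCHIVER.ARCHIVE_ALL(pSourceFileConfigKey => 123);  -- LEVEL 1: one configuration (placeholder key)
  CT_MRDS.FILE_ARCHIVER.ARCHIVE_ALL(pSourceKey => 'LM');           -- LEVEL 2: all enabled tables for one source
  CT_MRDS.FILE_ARCHIVER.ARCHIVE_ALL(pArchiveAll => TRUE);          -- LEVEL 3: all enabled tables, all sources
END;
/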

View File

@@ -304,7 +304,6 @@ AS
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_FILE_ALREADY_REGISTERED, vgMsgTmp);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
@@ -331,7 +330,6 @@ AS
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.MSG_UNKNOWN);
@@ -549,7 +547,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_MISSING_COLUMN_DATE_FORMAT, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_MISSING_COLUMN_DATE_FORMAT, ENV_MANAGER.MSG_MISSING_COLUMN_DATE_FORMAT);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
@@ -912,7 +909,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO',vParameters);
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END DROP_EXTERNAL_TABLE;
@@ -953,7 +949,6 @@ AS
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.MSG_UNKNOWN);
END COPY_FILE;
@@ -1012,7 +1007,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_NO_CONFIG_FOR_RECEIVED_FILE, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_NO_CONFIG_FOR_RECEIVED_FILE, ENV_MANAGER.MSG_NO_CONFIG_FOR_RECEIVED_FILE);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.MSG_UNKNOWN);
END MOVE_FILE;
@@ -1226,7 +1220,6 @@ AS
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
@@ -1442,7 +1435,6 @@ AS
-- ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_MISSING_COLUMN_DATE_FORMAT, 'ERROR', vParameters);
-- RAISE_ERROR(ENV_MANAGER.CODE_MISSING_COLUMN_DATE_FORMAT);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END GENERATE_EXTERNAL_TABLE_PARAMS;
@@ -1462,7 +1454,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_DUPLICATED_SOURCE_KEY, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_DUPLICATED_SOURCE_KEY, ENV_MANAGER.MSG_DUPLICATED_SOURCE_KEY);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END ADD_SOURCE;
@@ -1549,7 +1540,6 @@ AS
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END DELETE_SOURCE_CASCADE;
@@ -1583,7 +1573,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_MULTIPLE_CONTAINER_ENTRIES, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_MULTIPLE_CONTAINER_ENTRIES, ENV_MANAGER.MSG_MULTIPLE_CONTAINER_ENTRIES);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.MSG_UNKNOWN);
@@ -1626,7 +1615,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_MULTIPLE_MATCH_FOR_SRCFILE, vgMsgTmp);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.MSG_UNKNOWN);
@@ -1643,7 +1631,12 @@ AS
,pTableId IN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID%TYPE DEFAULT NULL
,pTemplateTableName IN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME%TYPE DEFAULT NULL
,pContainerFileKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY%TYPE DEFAULT NULL
,pEncoding IN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING%TYPE DEFAULT NULL -- MARS-1049: NEW PARAMETER
,pEncoding IN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING%TYPE DEFAULT NULL -- MARS-1049
,pIsWorkflowSuccessRequired IN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED%TYPE DEFAULT 'Y' -- MARS-1409
,pIsArchiveEnabled IN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED%TYPE DEFAULT 'N' -- MARS-828
,pIsKeptInTrash IN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_KEPT_IN_TRASH%TYPE DEFAULT 'Y' -- MARS-828
,pArchivalStrategy IN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY%TYPE DEFAULT 'THRESHOLD_BASED' -- MARS-828
,pMinimumAgeMonths IN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS%TYPE DEFAULT 0 -- MARS-828
) IS
vSourceFileConfigKey PLS_INTEGER;
vSourceKeyExists PLS_INTEGER := 0;
@@ -1659,10 +1652,15 @@ AS
,'pTemplateTableName => '''||nvl(to_char(pTemplateTableName), 'NULL')||''''
,'pContainerFileKey => '''||nvl(to_char(pContainerFileKey), 'NULL')||''''
,'pEncoding => '''||nvl(to_char(pEncoding), 'NULL')||'''' -- MARS-1049: NEW
,'pIsWorkflowSuccessRequired => '''||nvl(to_char(pIsWorkflowSuccessRequired), 'NULL')||'''' -- MARS-1409
,'pIsArchiveEnabled => '''||nvl(to_char(pIsArchiveEnabled), 'NULL')||'''' -- MARS-828
,'pIsKeptInTrash => '''||nvl(to_char(pIsKeptInTrash), 'NULL')||'''' -- MARS-828
,'pArchivalStrategy => '''||nvl(to_char(pArchivalStrategy), 'NULL')||'''' -- MARS-828
,'pMinimumAgeMonths => '''||nvl(to_char(pMinimumAgeMonths), 'NULL')||'''' -- MARS-828
));
ENV_MANAGER.LOG_PROCESS_EVENT('Start','INFO', vParameters);
INSERT INTO CT_MRDS.A_SOURCE_FILE_CONFIG(A_SOURCE_KEY, SOURCE_FILE_TYPE, SOURCE_FILE_ID, SOURCE_FILE_DESC, SOURCE_FILE_NAME_PATTERN, TABLE_ID, TEMPLATE_TABLE_NAME, CONTAINER_FILE_KEY, ENCODING)
VALUES (pSourceKey, pSourceFileType, pSourceFileId, pSourceFileDesc, pSourceFileNamePattern, pTableId, pTemplateTableName, pContainerFileKey, pEncoding);
INSERT INTO CT_MRDS.A_SOURCE_FILE_CONFIG(A_SOURCE_KEY, SOURCE_FILE_TYPE, SOURCE_FILE_ID, SOURCE_FILE_DESC, SOURCE_FILE_NAME_PATTERN, TABLE_ID, TEMPLATE_TABLE_NAME, CONTAINER_FILE_KEY, ENCODING, IS_WORKFLOW_SUCCESS_REQUIRED, IS_ARCHIVE_ENABLED, IS_KEPT_IN_TRASH, ARCHIVAL_STRATEGY, MINIMUM_AGE_MONTHS)
VALUES (pSourceKey, pSourceFileType, pSourceFileId, pSourceFileDesc, pSourceFileNamePattern, pTableId, pTemplateTableName, pContainerFileKey, pEncoding, pIsWorkflowSuccessRequired, pIsArchiveEnabled, pIsKeptInTrash, pArchivalStrategy, pMinimumAgeMonths);
COMMIT;
ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO',vParameters);
EXCEPTION
@@ -1675,7 +1673,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_MISSING_SOURCE_KEY, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_MISSING_SOURCE_KEY, ENV_MANAGER.MSG_MISSING_SOURCE_KEY);
ELSE
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END IF;
@@ -1704,7 +1701,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT('End','DEBUG',vParameters);
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END ADD_COLUMN_DATE_FORMAT;
@@ -1772,6 +1768,12 @@ AS
||cgBL||pLevel||'ARCHIVE_THRESHOLD_BYTES_SUM = '||pSourceFileConfig.ARCHIVE_THRESHOLD_BYTES_SUM
||cgBL||pLevel||'ARCHIVE_THRESHOLD_ROWS_COUNT = '||pSourceFileConfig.ARCHIVE_THRESHOLD_ROWS_COUNT
||cgBL||pLevel||'HOURS_TO_EXPIRE_STATISTICS = '||pSourceFileConfig.HOURS_TO_EXPIRE_STATISTICS
||cgBL||pLevel||'ENCODING = '||pSourceFileConfig.ENCODING
||cgBL||pLevel||'IS_ARCHIVE_ENABLED = '||pSourceFileConfig.IS_ARCHIVE_ENABLED
||cgBL||pLevel||'IS_KEPT_IN_TRASH = '||pSourceFileConfig.IS_KEPT_IN_TRASH
||cgBL||pLevel||'ARCHIVAL_STRATEGY = '||pSourceFileConfig.ARCHIVAL_STRATEGY
||cgBL||pLevel||'MINIMUM_AGE_MONTHS = '||pSourceFileConfig.MINIMUM_AGE_MONTHS
||cgBL||pLevel||'IS_WORKFLOW_SUCCESS_REQUIRED = '||pSourceFileConfig.IS_WORKFLOW_SUCCESS_REQUIRED
||cgBL||pLevel||''||'--------------------------------'
;

View File

@@ -17,12 +17,14 @@ AS
**/
-- Package Version Information (Semantic Versioning: MAJOR.MINOR.PATCH)
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.6.1';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-03-13 09:00:00';
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.6.3';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-03-17 12:30:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(100) := 'Grzegorz Michalski';
-- Version History (Latest changes first)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'3.6.3 (2026-03-17): MARS-828 - Added pIsArchiveEnabled, pIsKeptInTrash, pArchivalStrategy, pMinimumAgeMonths to ADD_SOURCE_FILE_CONFIG; FORMAT_CONFIG now shows all A_SOURCE_FILE_CONFIG columns' || CHR(13)||CHR(10) ||
'3.6.2 (2026-03-17): MARS-1409 - Added pIsWorkflowSuccessRequired parameter to ADD_SOURCE_FILE_CONFIG; IS_WORKFLOW_SUCCESS_REQUIRED shown in GET_DET_SOURCE_FILE_CONFIG_INFO output' || CHR(13)||CHR(10) ||
'3.6.1 (2026-03-13): MARS-1468 - Fixed CHAR/NCHAR/NVARCHAR2 column definitions in GENERATE_EXTERNAL_TABLE_PARAMS: CHAR now uses char_used/char_length semantics; NCHAR/NVARCHAR2 use char_length (data_length stores bytes in AL16UTF16)' || CHR(13)||CHR(10) ||
'3.6.0 (2026-02-27): MARS-1409 - Added A_WORKFLOW_HISTORY_KEY tracking in A_SOURCE_FILE_RECEIVED. Each file now stores its workflow execution key extracted during VALIDATE_SOURCE_FILE_RECEIVED' || CHR(13)||CHR(10) ||
'3.5.1 (2026-02-24): Fixed TIMESTAMP field syntax in GENERATE_EXTERNAL_TABLE_PARAMS for SQL*Loader compatibility (CHAR(35) DATE_FORMAT TIMESTAMP MASK format)' || CHR(13)||CHR(10) ||
@@ -443,13 +445,23 @@ AS
* @name ADD_SOURCE_FILE_CONFIG
* @desc Insert a new record to A_SOURCE_FILE_CONFIG table.
* MARS-1049: Added pEncoding parameter for CSV character set specification.
* MARS-1409: Added pIsWorkflowSuccessRequired parameter.
* MARS-828: Added pIsArchiveEnabled, pIsKeptInTrash, pArchivalStrategy, pMinimumAgeMonths.
* @param pEncoding - Character set encoding for CSV files (e.g., 'UTF8', 'WE8MSWIN1252', 'EE8ISO8859P2')
* If NULL, no CHARACTERSET clause is added to external table definitions
* @param pIsWorkflowSuccessRequired - 'Y' (default) = archival requires WORKFLOW_SUCCESSFUL='Y' (standard DBT flow)
* 'N' = archive regardless of workflow status (bypass for manual/non-DBT sources)
* @param pIsArchiveEnabled - 'Y' = enable automatic archival for this config; 'N' (default) = disabled
* @param pIsKeptInTrash - 'Y' (default) = keep archived files in the TRASH folder for retention; 'N' = delete them from TRASH immediately after archival
* @param pArchivalStrategy - Archival strategy: 'THRESHOLD_BASED' (default), 'MINIMUM_AGE_MONTHS', or 'HYBRID'
* @param pMinimumAgeMonths - Minimum age in months before a file is eligible for archival (used with the MINIMUM_AGE_MONTHS and HYBRID strategies)
* @example CALL CT_MRDS.FILE_MANAGER.ADD_SOURCE_FILE_CONFIG(
* pSourceKey => 'C2D', pSourceFileType => 'INPUT',
* pSourceFileId => 'UC_DISSEM', pTableId => 'METADATA_LOADS',
* pTemplateTableName => 'CT_ET_TEMPLATES.C2D_A_UC_DISSEM_METADATA_LOADS',
* pEncoding => 'UTF8'
* pEncoding => 'UTF8', pIsWorkflowSuccessRequired => 'Y',
* pIsArchiveEnabled => 'Y', pIsKeptInTrash => 'N',
* pArchivalStrategy => 'MINIMUM_AGE_MONTHS', pMinimumAgeMonths => 3
* );
**/
PROCEDURE ADD_SOURCE_FILE_CONFIG (
@@ -461,7 +473,12 @@ AS
,pTableId IN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID%TYPE DEFAULT NULL
,pTemplateTableName IN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME%TYPE DEFAULT NULL
,pContainerFileKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY%TYPE DEFAULT NULL
,pEncoding IN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING%TYPE DEFAULT NULL -- MARS-1049: NEW PARAMETER
,pEncoding IN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING%TYPE DEFAULT NULL -- MARS-1049
,pIsWorkflowSuccessRequired IN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED%TYPE DEFAULT 'Y' -- MARS-1409
,pIsArchiveEnabled IN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED%TYPE DEFAULT 'N' -- MARS-828
,pIsKeptInTrash IN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_KEPT_IN_TRASH%TYPE DEFAULT 'Y' -- MARS-828
,pArchivalStrategy IN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY%TYPE DEFAULT 'THRESHOLD_BASED' -- MARS-828
,pMinimumAgeMonths IN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS%TYPE DEFAULT 0 -- MARS-828
);

View File

@@ -1,44 +0,0 @@
CREATE OR REPLACE EDITIONABLE TRIGGER "CT_MRDS"."A_WORKFLOW_HISTORY"
AFTER INSERT OR UPDATE OF workflow_successful ON ct_mrds.a_workflow_history
REFERENCING NEW AS new OLD AS old
FOR EACH ROW
DECLARE
v_workflow_name VARCHAR2(128);
v_wla_id NUMBER;
BEGIN
IF :new.service_name = 'ODS' AND :new.workflow_name IN (
'w_ODS_LM_STANDING_FACILITIES', 'w_ODS_CSDB_DEBT', 'w_ODS_CSDB_DEBT_DAILY', 'w_ODS_CSDB_RATINGS_FULL',
'w_ODS_TMS_LIMIT_ACCESS', 'w_ODS_TMS_PORTFOLIO_ACCESS', 'w_ODS_TMS_PORTFOLIO_TREE',
'w_ODS_TMS_COLLATERAL_INVENTORY', 'w_ODS_TOP_FULLBIDARRAY_COMPILED', 'w_ODS_TOP_ANNOUNCEMENT',
'w_ODS_TOP_ALLOTMENT_MODIFICATIONS', 'w_ODS_TOP_ALLOTMENT', 'w_ODS_CEPH_PRICING', 'w_ODS_C2D_MPEC'
) THEN
IF :new.workflow_successful = 'Y' AND :new.workflow_successful <> NVL(:old.workflow_successful, 'N') THEN
CASE
WHEN :new.workflow_name = 'w_ODS_LM_STANDING_FACILITIES' THEN v_workflow_name := 'w_ODS_LM_STANDING_FACILITY';
WHEN :new.workflow_name = 'w_ODS_TMS_LIMIT_ACCESS' THEN v_workflow_name := 'w_ODS_TMS_RAR_LIMITACCESS';
WHEN :new.workflow_name = 'w_ODS_TMS_PORTFOLIO_ACCESS' THEN v_workflow_name := 'w_ODS_TMS_RAR_PORTFOLIOACCESS';
WHEN :new.workflow_name = 'w_ODS_TMS_PORTFOLIO_TREE' THEN v_workflow_name := 'w_ODS_TMS_RAR_PORTFOLIOTREE';
WHEN :new.workflow_name = 'w_ODS_TMS_COLLATERAL_INVENTORY' THEN v_workflow_name := 'w_ODS_TMS_RAR_RARCOLLATERALINVENTORY';
WHEN :new.workflow_name = 'w_ODS_TOP_FULLBIDARRAY_COMPILED' THEN v_workflow_name := 'w_ODS_TOP_FULLBIDARRAY_COMPILED';
WHEN :new.workflow_name = 'w_ODS_TOP_ANNOUNCEMENT' THEN v_workflow_name := 'w_ODS_TOP_ANNOUNCEMENT';
WHEN :new.workflow_name = 'w_ODS_TOP_ALLOTMENT_MODIFICATIONS' THEN v_workflow_name := 'w_ODS_TOP_ALLOTMENT_MODIFICATIONS';
WHEN :new.workflow_name = 'w_ODS_TOP_ALLOTMENT' THEN v_workflow_name := 'w_ODS_TOP_ALLOTMENT';
WHEN :new.workflow_name = 'w_ODS_CEPH_PRICING' THEN v_workflow_name := 'w_ODS_CEPH_PRICING';
WHEN :new.workflow_name = 'w_ODS_C2D_MPEC' THEN v_workflow_name := 'w_ODS_C2D_MPEC';
ELSE
v_workflow_name := :new.workflow_name;
END CASE;
BEGIN
v_wla_id := TO_NUMBER(:new.orchestration_run_id);
EXCEPTION WHEN OTHERS THEN NULL;
END;
INSERT INTO ct_ods.a_load_history (
a_etl_load_set_key, workflow_name, infa_run_id, load_start, load_end, exdi_appl_req_id, exdi_correlation_id, load_successful, wla_run_id, dq_flag
) VALUES (
:new.a_workflow_history_key, v_workflow_name, NULL, :new.workflow_start, :new.workflow_end, NULL, NULL, :new.workflow_successful, v_wla_id, 'F'
);
END IF;
END IF;
END
;
/

View File

@@ -32,7 +32,7 @@ PROMPT MARS-1409 Rollback Starting
PROMPT ============================================================================
PROMPT Package: CT_MRDS.FILE_MANAGER
PROMPT Change: Remove A_WORKFLOW_HISTORY_KEY column and restore previous version
PROMPT Steps: 10 (Restore FILE_ARCHIVER, Restore FILE_MANAGER, Restore ENV_MANAGER, Restore trigger, Clear data, Drop column, Verify)
PROMPT Steps: 13 (Drop tables/columns first, then Restore ENV_MANAGER, FILE_MANAGER, DATA_EXPORTER, FILE_ARCHIVER (dependency order), Restore trigger, Verify)
PROMPT Timestamp:
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS rollback_start FROM DUAL;
PROMPT ============================================================================
@@ -49,61 +49,80 @@ END;
PROMPT
PROMPT ============================================================================
PROMPT STEP 1: Restore FILE_ARCHIVER package specification (previous version)
PROMPT STEP 1: Drop A_TABLE_STAT, A_TABLE_STAT_HIST and IS_WORKFLOW_SUCCESS_REQUIRED column
PROMPT (must be done BEFORE compiling rollback packages so column names match)
PROMPT ============================================================================
@@91_MARS_1409_rollback_CT_MRDS_FILE_ARCHIVER_SPEC.sql
@@98_MARS_1409_rollback_archival_strategy_column.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 2: Restore FILE_ARCHIVER package body (previous version)
PROMPT ============================================================================
@@92_MARS_1409_rollback_CT_MRDS_FILE_ARCHIVER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 3: Restore FILE_MANAGER package specification (previous version)
PROMPT ============================================================================
@@93_MARS_1409_rollback_CT_MRDS_FILE_MANAGER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 4: Restore FILE_MANAGER package body (previous version)
PROMPT ============================================================================
@@94_MARS_1409_rollback_CT_MRDS_FILE_MANAGER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 5: Restore ENV_MANAGER package specification (previous version)
PROMPT ============================================================================
@@95_MARS_1409_rollback_CT_MRDS_ENV_MANAGER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 6: Restore ENV_MANAGER package body (previous version)
PROMPT ============================================================================
@@96_MARS_1409_rollback_CT_MRDS_ENV_MANAGER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 7: Restore A_WORKFLOW_HISTORY trigger (previous version)
PROMPT ============================================================================
@@97_MARS_1409_rollback_CT_MRDS_A_WORKFLOW_HISTORY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 8: Drop A_WORKFLOW_HISTORY_KEY column
PROMPT STEP 2: Drop A_WORKFLOW_HISTORY_KEY column from A_SOURCE_FILE_RECEIVED
PROMPT ============================================================================
@@99_MARS_1409_rollback_workflow_history_key_column.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 9: Verify rollback
PROMPT STEP 3: Restore ENV_MANAGER package specification (previous version)
PROMPT ============================================================================
@@95_MARS_1409_rollback_CT_MRDS_ENV_MANAGER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 4: Restore ENV_MANAGER package body (previous version)
PROMPT ============================================================================
@@96_MARS_1409_rollback_CT_MRDS_ENV_MANAGER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 5: Restore FILE_MANAGER package specification (previous version)
PROMPT ============================================================================
@@93_MARS_1409_rollback_CT_MRDS_FILE_MANAGER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 6: Restore FILE_MANAGER package body (previous version)
PROMPT ============================================================================
@@94_MARS_1409_rollback_CT_MRDS_FILE_MANAGER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 7: Restore DATA_EXPORTER package specification (previous version)
PROMPT ============================================================================
@@83_MARS_1409_rollback_CT_MRDS_DATA_EXPORTER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 8: Restore DATA_EXPORTER package body (previous version)
PROMPT ============================================================================
@@84_MARS_1409_rollback_CT_MRDS_DATA_EXPORTER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 9: Restore FILE_ARCHIVER package specification (previous version)
PROMPT ============================================================================
@@91_MARS_1409_rollback_CT_MRDS_FILE_ARCHIVER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 10: Restore FILE_ARCHIVER package body (previous version)
PROMPT ============================================================================
@@92_MARS_1409_rollback_CT_MRDS_FILE_ARCHIVER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 11: Restore A_WORKFLOW_HISTORY trigger (previous version)
PROMPT ============================================================================
@@97_MARS_1409_rollback_CT_MRDS_A_WORKFLOW_HISTORY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 12: Verify rollback
PROMPT ============================================================================
@@90_MARS_1409_verify_rollback.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 10: Verify package versions
PROMPT STEP 13: Verify package versions
PROMPT ============================================================================
@@verify_packages_version.sql
@@ -118,3 +137,5 @@ PROMPT Review the log file for detailed results: &_filename
PROMPT ============================================================================
spool off
quit;
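Beyond the scripted verification steps, a quick standalone sanity check after the rollback (assuming SELECT access on ALL_OBJECTS) is to list anything left invalid:

SELECT object_name, object_type, status
  FROM all_objects
 WHERE owner = 'CT_MRDS'
   AND status = 'INVALID'
 ORDER BY object_name, object_type;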

View File

@@ -0,0 +1,101 @@
-- ====================================================================
-- A_SOURCE_FILE_CONFIG Table (rollback_version - pre MARS-1409)
-- ====================================================================
-- Purpose: Store source file configuration and processing rules
-- MARS-1049: Added ENCODING column for CSV character set support
-- MARS-828: Added ARCHIVAL_STRATEGY and MINIMUM_AGE_MONTHS for archival automation
-- NOTE: IS_WORKFLOW_SUCCESS_REQUIRED column NOT included (added by MARS-1409)
-- ====================================================================
CREATE TABLE CT_MRDS.A_SOURCE_FILE_CONFIG (
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL ENABLE,
A_SOURCE_KEY VARCHAR2(30) NOT NULL ENABLE,
SOURCE_FILE_TYPE VARCHAR2(200), -- Can be 'INPUT' or 'CONTAINER' or 'LOAD_CONFIG'
SOURCE_FILE_ID VARCHAR2(200),
SOURCE_FILE_DESC VARCHAR2(2000),
SOURCE_FILE_NAME_PATTERN VARCHAR2(200),
TABLE_ID VARCHAR2(200),
TEMPLATE_TABLE_NAME VARCHAR2(200),
CONTAINER_FILE_KEY NUMBER(38,0),
ARCHIVE_THRESHOLD_DAYS NUMBER(4,0),
ARCHIVE_THRESHOLD_FILES_COUNT NUMBER(38,0),
ARCHIVE_THRESHOLD_BYTES_SUM NUMBER(38,0),
ODS_SCHEMA_NAME VARCHAR2(100),
ARCHIVE_THRESHOLD_ROWS_COUNT NUMBER(38,0),
HOURS_TO_EXPIRE_STATISTICS NUMBER(38,3),
ARCHIVAL_STRATEGY VARCHAR2(50),
MINIMUM_AGE_MONTHS NUMBER(3,0),
ENCODING VARCHAR2(50) DEFAULT 'UTF8',
IS_ARCHIVE_ENABLED CHAR(1) DEFAULT 'N' NOT NULL,
IS_KEEP_IN_TRASH CHAR(1) DEFAULT 'N' NOT NULL,
CONSTRAINT A_SOURCE_FILE_CONFIG_PK PRIMARY KEY (A_SOURCE_FILE_CONFIG_KEY),
CONSTRAINT CHK_IS_ARCHIVE_ENABLED CHECK (IS_ARCHIVE_ENABLED IN ('Y', 'N')),
CONSTRAINT CHK_IS_KEEP_IN_TRASH CHECK (IS_KEEP_IN_TRASH IN ('Y', 'N')),
CONSTRAINT SOURCE_FILE_TYPE_CHK CHECK (SOURCE_FILE_TYPE IN ('INPUT', 'CONTAINER', 'LOAD_CONFIG')),
CONSTRAINT ASFC_A_SOURCE_KEY_FK FOREIGN KEY(A_SOURCE_KEY) REFERENCES CT_MRDS.A_SOURCE(A_SOURCE_KEY),
CONSTRAINT ASFC_CONTAINER_FILE_KEY_FK FOREIGN KEY(CONTAINER_FILE_KEY) REFERENCES CT_MRDS.A_SOURCE_FILE_CONFIG(A_SOURCE_FILE_CONFIG_KEY),
CONSTRAINT A_SOURCE_FILE_CONFIG_UQ1 UNIQUE(SOURCE_FILE_TYPE, SOURCE_FILE_ID, TABLE_ID)
) TABLESPACE "DATA";
-- Column comments
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY IS
'Primary key - unique identifier for source file configuration record';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_KEY IS
'Foreign key to A_SOURCE table - identifies the source system (e.g., LM, C2D, CSDB)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE IS
'Type of file configuration: INPUT (data files), CONTAINER (XML files), or LOAD_CONFIG (configuration files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID IS
'Unique identifier for the source file within the source system (e.g., UC_DISSEM, STANDING_FACILITIES)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_DESC IS
'Human-readable description of the source file and its purpose';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_NAME_PATTERN IS
'Filename pattern for matching incoming files (supports wildcards, e.g., UC_NMA_DISSEM-*.csv)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID IS
'Identifier for the target table where data will be loaded (without schema prefix)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME IS
'Fully qualified name of template table in CT_ET_TEMPLATES schema used for external table creation';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY IS
'Foreign key to parent container configuration when this file is part of an XML container (NULL for standalone files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_DAYS IS
'Threshold for THRESHOLD_BASED strategy: archive data older than N days';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_FILES_COUNT IS
'Trigger archival when file count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_BYTES_SUM IS
'Trigger archival when total size in bytes exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_ROWS_COUNT IS
'Trigger archival when total row count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ODS_SCHEMA_NAME IS
'Schema name where ODS external tables are created (typically ODS)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.HOURS_TO_EXPIRE_STATISTICS IS
'Number of hours before table statistics expire and need to be recalculated';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY IS
'Archival strategy: THRESHOLD_BASED (days-based), MINIMUM_AGE_MONTHS (0=current month, N=retain N months), HYBRID (combination)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS IS
'Minimum age in months before archival (required for MINIMUM_AGE_MONTHS and HYBRID strategies, 0=current month only)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING IS
'Oracle character set name for CSV files (e.g., UTF8, WE8MSWIN1252, EE8ISO8859P2)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED IS
'Y=Enable archiving, N=Skip archiving. Controls if table participates in archival process';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_KEEP_IN_TRASH IS
'Y=Keep files in TRASH after archiving, N=Delete immediately. Controls TRASH retention policy';
GRANT SELECT, INSERT, UPDATE, DELETE ON CT_MRDS.A_SOURCE_FILE_CONFIG TO MRDS_LOADER_ROLE;
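As an illustrative read against this rollback-era structure (note the pre-rename IS_KEEP_IN_TRASH column), the configurations eligible for age-based archival could be listed as:

SELECT source_file_id, archival_strategy, minimum_age_months, is_keep_in_trash
  FROM ct_mrds.a_source_file_config
 WHERE is_archive_enabled = 'Y'
   AND archival_strategy IN ('MINIMUM_AGE_MONTHS', 'HYBRID');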

View File

@@ -0,0 +1,26 @@
-- ====================================================================
-- A_TABLE_STAT Table (rollback_version - pre MARS-1409)
-- ====================================================================
-- Purpose: Store current table statistics and archival thresholds
-- NOTE: This is the pre-MARS-1409 structure without:
-- ARCHIVAL_STRATEGY, ARCH_MINIMUM_AGE_MONTHS, IS_WORKFLOW_SUCCESS_REQUIRED,
-- WORKFLOW_SUCCESS_FILE_COUNT, WORKFLOW_SUCCESS_ROW_COUNT, WORKFLOW_SUCCESS_TOTAL_SIZE
-- Column names: SIZE (not TOTAL_SIZE), OVER_ARCH_THRESOLD_SIZE (not OVER_ARCH_THRESOLD_TOTAL_SIZE)
-- ====================================================================
CREATE TABLE CT_MRDS.A_TABLE_STAT (
A_TABLE_STAT_KEY NUMBER(38,0) PRIMARY KEY,
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL,
TABLE_NAME VARCHAR2(200) NOT NULL,
CREATED TIMESTAMP(6) DEFAULT SYSTIMESTAMP,
ARCH_THRESHOLD_DAYS NUMBER(4,0),
FILE_COUNT NUMBER(38,0),
ROW_COUNT NUMBER(38,0),
"SIZE" NUMBER(38,0),
OVER_ARCH_THRESOLD_FILE_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_ROW_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_SIZE NUMBER(38,0),
CONSTRAINT A_TABLE_STAT_UK1 UNIQUE(A_SOURCE_FILE_CONFIG_KEY)
) TABLESPACE "DATA";
-- Note: A_TABLE_STAT_UK1 index is auto-created by the UNIQUE constraint definition above.
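-- Because SIZE is an Oracle reserved word, the column was created as a quoted
-- identifier and must be referenced with double quotes; an illustrative read:
--   SELECT table_name, file_count, row_count, "SIZE"
--     FROM ct_mrds.a_table_stat;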

View File

@@ -0,0 +1,23 @@
-- ====================================================================
-- A_TABLE_STAT_HIST Table (rollback_version - pre MARS-1409)
-- ====================================================================
-- Purpose: Store historical table statistics for trend analysis
-- NOTE: This is the pre-MARS-1409 structure without:
-- ARCHIVAL_STRATEGY, ARCH_MINIMUM_AGE_MONTHS, IS_WORKFLOW_SUCCESS_REQUIRED,
-- WORKFLOW_SUCCESS_FILE_COUNT, WORKFLOW_SUCCESS_ROW_COUNT, WORKFLOW_SUCCESS_TOTAL_SIZE
-- Column names: SIZE (not TOTAL_SIZE), OVER_ARCH_THRESOLD_SIZE (not OVER_ARCH_THRESOLD_TOTAL_SIZE)
-- ====================================================================
CREATE TABLE CT_MRDS.A_TABLE_STAT_HIST (
A_TABLE_STAT_HIST_KEY NUMBER(38,0) PRIMARY KEY,
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL,
TABLE_NAME VARCHAR2(200) NOT NULL,
CREATED TIMESTAMP(6) DEFAULT SYSTIMESTAMP,
ARCH_THRESHOLD_DAYS NUMBER(4,0),
FILE_COUNT NUMBER(38,0),
ROW_COUNT NUMBER(38,0),
"SIZE" NUMBER(38,0),
OVER_ARCH_THRESOLD_FILE_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_ROW_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_SIZE NUMBER(38,0)
) TABLESPACE "DATA";
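Since this table exists for trend analysis, a sketch of a month-over-month rollup for a single configuration (the key value 42 is purely hypothetical):

SELECT TRUNC(created, 'MM') AS stat_month,
       MAX(row_count)       AS max_rows,
       MAX("SIZE")          AS max_bytes
  FROM ct_mrds.a_table_stat_hist
 WHERE a_source_file_config_key = 42
 GROUP BY TRUNC(created, 'MM')
 ORDER BY stat_month;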

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,220 @@
create or replace PACKAGE CT_MRDS.DATA_EXPORTER
AUTHID CURRENT_USER
AS
/**
* Data Export Package: Provides comprehensive data export capabilities to various formats (CSV, Parquet)
* with support for cloud storage integration via Oracle Cloud Infrastructure (OCI).
* The structure of this comment is used by the GET_PACKAGE_DOCUMENTATION function,
* which returns documentation text for the Confluence page (ready to copy-paste).
**/
-- Package Version Information
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '2.17.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(19) := '2026-03-11 10:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(50) := 'MRDS Development Team';
-- Version History (latest changes first)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'v2.17.0 (2026-03-11): PARQUET FIX - Added pFormat parameter to buildQueryWithDateFormats. REPLACE(col,CHR(34)) now applied only when pFormat=CSV. EXPORT_TABLE_DATA_BY_DATE passes PARQUET - string data was being corrupted (single " doubled to ""). Parquet is binary and needs no quote escaping.' || CHR(10) ||
'v2.16.0 (2026-03-11): RFC 4180 FIX - Added REPLACE(col,CHR(34),CHR(34)||CHR(34)) in buildQueryWithDateFormats for VARCHAR2/CHAR/CLOB. Pre-doubled values produce compliant CSV for ORACLE_LOADER OPTIONALLY ENCLOSED BY chr(34).' || CHR(10) ||
'v2.6.3 (2026-01-28): COMPILATION FIX - Resolved ORA-00904 error in EXPORT_PARTITION_PARALLEL. SQLERRM and DBMS_UTILITY.FORMAT_ERROR_BACKTRACE cannot be used directly in SQL UPDATE statements. Now properly assigned to vgMsgTmp variable before UPDATE.' || CHR(10) ||
'v2.6.2 (2026-01-28): CRITICAL FIX - Race condition when multiple exports run simultaneously. Changed DELETE to filter by age (>24h) instead of deleting all COMPLETED chunks. Prevents concurrent sessions from deleting each other''s chunks. Session-safe cleanup with TASK_NAME filtering. Enables true parallel execution of multiple export jobs.' || CHR(10) ||
'v2.6.1 (2026-01-28): Added DELETE_FAILED_EXPORT_FILE procedure to clean up partial/corrupted files before retry. When partition fails mid-export, partial file is deleted before retry to prevent Oracle from creating _1 suffixed duplicates. Ensures clean retry without orphaned files in OCI bucket.' || CHR(10) ||
'v2.6.0 (2026-01-28): CRITICAL FIX - Added STATUS tracking to A_PARALLEL_EXPORT_CHUNKS table to prevent data duplication on retry. System now restarts ONLY failed partitions instead of re-exporting all data. Added ERROR_MESSAGE and EXPORT_TIMESTAMP columns for better error handling and monitoring. Prevents duplicate file creation when parallel tasks fail (e.g., 22 partitions with 16 threads, 3 failures no longer duplicates 19 successful exports).' || CHR(10) ||
'v2.5.0 (2026-01-26): Added recorddelimiter parameter with CRLF (CHR(13)||CHR(10)) for CSV exports to ensure Windows-compatible line endings. Improves cross-platform compatibility when CSV files are opened in Windows applications (Notepad, Excel).' || CHR(10) ||
'v2.4.0 (2026-01-11): Added pTemplateTableName parameter for per-column date format configuration. Implements dynamic query building with TO_CHAR for each date/timestamp column using FILE_MANAGER.GET_DATE_FORMAT. Supports 3-tier hierarchy: column-specific, template DEFAULT, global fallback. Eliminates single dateformat limitation of DBMS_CLOUD.EXPORT_DATA.' || CHR(10) ||
'v2.3.0 (2025-12-20): Added parallel partition processing using DBMS_PARALLEL_EXECUTE. New pParallelDegree parameter (1-16, default 1) for EXPORT_TABLE_DATA_BY_DATE and EXPORT_TABLE_DATA_TO_CSV_BY_DATE procedures. Each year/month partition processed in separate thread for improved performance.' || CHR(10) ||
'v2.2.0 (2025-12-19): DRY refactoring - extracted shared helper functions (sanitizeFilename, VALIDATE_TABLE_AND_COLUMNS, GET_PARTITIONS, EXPORT_SINGLE_PARTITION worker procedure). Reduced code duplication by ~400 lines. Prepared architecture for v2.3.0 parallel processing.' || CHR(10) ||
'v2.1.1 (2025-12-04): Fixed JOIN column reference A_WORKFLOW_HISTORY_KEY -> A_ETL_LOAD_SET_KEY, added consistent column mapping and dynamic column list to EXPORT_TABLE_DATA procedure, enhanced DEBUG logging for all export operations' || CHR(10) ||
'v2.1.0 (2025-10-22): Added version tracking and PARTITION_YEAR/PARTITION_MONTH support' || CHR(10) ||
'v2.0.0 (2025-10-01): Separated export functionality from FILE_MANAGER package' || CHR(10);
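-- Illustration of the v2.16.0/v2.17.0 quote handling described above (a sketch,
-- not the package's verbatim expression): for CSV exports, embedded double
-- quotes are pre-doubled so ORACLE_LOADER's OPTIONALLY ENCLOSED BY chr(34)
-- reads them back as single quotes; Parquet columns are passed through as-is:
--   SELECT REPLACE(col, CHR(34), CHR(34) || CHR(34)) AS col FROM t;  -- CSV only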
cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);
vgMsgTmp VARCHAR2(32000);
---------------------------------------------------------------------------------------------------------------------------
-- TYPE DEFINITIONS FOR PARTITION HANDLING
---------------------------------------------------------------------------------------------------------------------------
/**
* Record type for year/month partition information
**/
TYPE partition_rec IS RECORD (
year VARCHAR2(4),
month VARCHAR2(2)
);
/**
* Table type for collection of partition records
**/
TYPE partition_tab IS TABLE OF partition_rec;
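-- Minimal usage sketch for these public types (values hypothetical; callers
-- normally receive the collection from the package rather than building it):
--   DECLARE
--     vParts CT_MRDS.DATA_EXPORTER.partition_tab := CT_MRDS.DATA_EXPORTER.partition_tab();
--   BEGIN
--     vParts.EXTEND;
--     vParts(1).year  := '2026';
--     vParts(1).month := '03';
--   END;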
---------------------------------------------------------------------------------------------------------------------------
-- INTERNAL PARALLEL PROCESSING CALLBACK
---------------------------------------------------------------------------------------------------------------------------
/**
* @name EXPORT_PARTITION_PARALLEL
* @desc Internal callback procedure for DBMS_PARALLEL_EXECUTE.
* Processes single partition (year/month) chunk in parallel task.
* Called by DBMS_PARALLEL_EXECUTE framework for each chunk.
* This procedure is PUBLIC because DBMS_PARALLEL_EXECUTE requires it,
* but should NOT be called directly by external code.
* @param pStartId - Chunk start ID (CHUNK_ID from A_PARALLEL_EXPORT_CHUNKS table)
* @param pEndId - Chunk end ID (same as pStartId for single-row chunks)
**/
PROCEDURE EXPORT_PARTITION_PARALLEL (
pStartId IN NUMBER,
pEndId IN NUMBER
);
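-- Presumed framework wiring (a sketch of standard DBMS_PARALLEL_EXECUTE usage,
-- not the verbatim package body; variable names are illustrative):
--   DBMS_PARALLEL_EXECUTE.RUN_TASK(
--     task_name      => vTaskName,
--     sql_stmt       => 'BEGIN CT_MRDS.DATA_EXPORTER.EXPORT_PARTITION_PARALLEL(:start_id, :end_id); END;',
--     language_flag  => DBMS_SQL.NATIVE,
--     parallel_level => pParallelDegree
--   );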
---------------------------------------------------------------------------------------------------------------------------
-- MAIN EXPORT PROCEDURES
---------------------------------------------------------------------------------------------------------------------------
/**
* @name EXPORT_TABLE_DATA
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data into a CSV file on OCI infrastructure.
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example
* begin
* DATA_EXPORTER.EXPORT_TABLE_DATA(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'csv_exports'
* );
* end;
**/
PROCEDURE EXPORT_TABLE_DATA (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
/**
* @name EXPORT_TABLE_DATA_BY_DATE
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data into PARQUET files on OCI infrastructure.
* Each YEAR_MONTH pair goes to a separate file (implicit partitioning).
* Allows specifying custom column list or uses T.* if pColumnList is NULL.
* Validates that all columns in pColumnList exist in the target table.
* Automatically adds 'T.' prefix to column names in pColumnList.
* Supports parallel partition processing via pParallelDegree parameter (default 1, range 1-16).
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example
* begin
* DATA_EXPORTER.EXPORT_TABLE_DATA_BY_DATE(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'parquet_exports',
* pColumnList => 'COLUMN1, COLUMN2, COLUMN3', -- Optional
* pMinDate => DATE '2024-01-01',
* pMaxDate => SYSDATE,
* pParallelDegree => 8 -- Optional, default 1, range 1-16
* );
* end;
**/
PROCEDURE EXPORT_TABLE_DATA_BY_DATE (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pColumnList IN VARCHAR2 default NULL,
pMinDate IN DATE default DATE '1900-01-01',
pMaxDate IN DATE default SYSDATE,
pParallelDegree IN NUMBER default 1,
pTemplateTableName IN VARCHAR2 default NULL,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
/**
* @name EXPORT_TABLE_DATA_TO_CSV_BY_DATE
* @desc Exports data to separate CSV files partitioned by year and month.
* Creates one CSV file for each year/month combination found in the data.
* Uses the same CT_ODS.A_LOAD_HISTORY date-filtering mechanism as EXPORT_TABLE_DATA_BY_DATE,
* but exports to CSV format instead of Parquet.
* Supports parallel partition processing via pParallelDegree parameter (1-16).
* File naming pattern: {pFileName}_YYYYMM.csv or {TABLENAME}_YYYYMM.csv (if pFileName is NULL)
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example
* begin
* -- With custom filename
* DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'exports',
* pFileName => 'my_export.csv',
* pMinDate => DATE '2024-01-01',
* pMaxDate => SYSDATE,
* pParallelDegree => 8 -- Optional, default 1, range 1-16
* );
*
* -- With auto-generated filename (based on table name only)
* DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
* pSchemaName => 'OU_TOP',
* pTableName => 'AGGREGATED_ALLOTMENT',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'ARCHIVE',
* pFolderName => 'exports',
* pMinDate => DATE '2025-09-01',
* pMaxDate => DATE '2025-09-17'
* );
* -- This will create files like: AGGREGATED_ALLOTMENT_202509.csv, etc.
* end;
**/
PROCEDURE EXPORT_TABLE_DATA_TO_CSV_BY_DATE (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pFileName IN VARCHAR2 DEFAULT NULL,
pColumnList IN VARCHAR2 default NULL,
pMinDate IN DATE default DATE '1900-01-01',
pMaxDate IN DATE default SYSDATE,
pParallelDegree IN NUMBER default 1,
pTemplateTableName IN VARCHAR2 default NULL,
pMaxFileSize IN NUMBER default 104857600,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
---------------------------------------------------------------------------------------------------------------------------
-- VERSION MANAGEMENT FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* Returns the current package version number
* return: Version string in format X.Y.Z (e.g., '2.1.0')
**/
FUNCTION GET_VERSION RETURN VARCHAR2;
/**
* Returns comprehensive build information including version, date, and author
* return: Formatted string with complete build details
**/
FUNCTION GET_BUILD_INFO RETURN VARCHAR2;
/**
* Returns the version history with recent changes
* return: Multi-line string with version history
**/
FUNCTION GET_VERSION_HISTORY RETURN VARCHAR2;
END;
/
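A quick smoke test of the version functions from SQL*Plus (sketch):

SET SERVEROUTPUT ON
BEGIN
  DBMS_OUTPUT.PUT_LINE(CT_MRDS.DATA_EXPORTER.GET_BUILD_INFO());
  DBMS_OUTPUT.PUT_LINE(CT_MRDS.DATA_EXPORTER.GET_VERSION_HISTORY());
END;
/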

View File

@@ -18,6 +18,8 @@ DECLARE
v_env_manager_build VARCHAR2(100);
v_file_archiver_version VARCHAR2(50);
v_file_archiver_build VARCHAR2(100);
v_data_exporter_version VARCHAR2(50);
v_data_exporter_build VARCHAR2(500);
BEGIN
-- Get FILE_MANAGER version
BEGIN
@@ -54,6 +56,18 @@ BEGIN
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('WARNING: Could not retrieve FILE_ARCHIVER version');
END;
-- Get DATA_EXPORTER version
BEGIN
v_data_exporter_version := CT_MRDS.DATA_EXPORTER.GET_VERSION();
v_data_exporter_build := CT_MRDS.DATA_EXPORTER.GET_BUILD_INFO();
DBMS_OUTPUT.PUT_LINE('DATA_EXPORTER Version: ' || v_data_exporter_version);
DBMS_OUTPUT.PUT_LINE('DATA_EXPORTER Build: ' || v_data_exporter_build);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('WARNING: Could not retrieve DATA_EXPORTER version');
END;
-- Insert version records into A_PACKAGE_VERSION_TRACKING
BEGIN
@@ -78,6 +92,13 @@ BEGIN
USING 'CT_MRDS', 'FILE_ARCHIVER', 'BOTH', v_file_archiver_version,
'', '', 'MARS-1409';
EXECUTE IMMEDIATE 'INSERT INTO CT_MRDS.A_PACKAGE_VERSION_TRACKING
(PACKAGE_OWNER, PACKAGE_NAME, PACKAGE_TYPE, PACKAGE_VERSION,
PACKAGE_BUILD_DATE, PACKAGE_AUTHOR, TRACKING_DATE, TRACKED_BY_USER, TRACKED_BY_MODULE)
VALUES (:1, :2, :3, :4, :5, :6, SYSTIMESTAMP, USER, :7)'
USING 'CT_MRDS', 'DATA_EXPORTER', 'BOTH', v_data_exporter_version,
'', '', 'MARS-1409';
COMMIT;
DBMS_OUTPUT.PUT_LINE('Package version tracking recorded successfully');
EXCEPTION

View File

@@ -26,13 +26,25 @@ PROMPT CT_MRDS.ENV_MANAGER Package:
SELECT CT_MRDS.ENV_MANAGER.GET_VERSION() AS VERSION FROM DUAL;
SELECT CT_MRDS.ENV_MANAGER.GET_BUILD_INFO() AS BUILD_INFO FROM DUAL;
-- FILE_ARCHIVER version
PROMPT
PROMPT CT_MRDS.FILE_ARCHIVER Package:
SELECT CT_MRDS.FILE_ARCHIVER.GET_VERSION() AS VERSION FROM DUAL;
SELECT CT_MRDS.FILE_ARCHIVER.GET_BUILD_INFO() AS BUILD_INFO FROM DUAL;
-- DATA_EXPORTER version
PROMPT
PROMPT CT_MRDS.DATA_EXPORTER Package:
SELECT CT_MRDS.DATA_EXPORTER.GET_VERSION() AS VERSION FROM DUAL;
SELECT CT_MRDS.DATA_EXPORTER.GET_BUILD_INFO() AS BUILD_INFO FROM DUAL;
-- Package compilation status
PROMPT
PROMPT Package Compilation Status:
SELECT object_name, object_type, status, last_ddl_time
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER')
AND object_name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER', 'DATA_EXPORTER')
AND object_type IN ('PACKAGE', 'PACKAGE BODY')
ORDER BY object_name, object_type;
@@ -42,7 +54,7 @@ PROMPT Compilation Errors (if any):
SELECT name, type, line, position, text
FROM all_errors
WHERE owner = 'CT_MRDS'
AND name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER')
AND name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER', 'DATA_EXPORTER')
ORDER BY name, type, line, position;
PROMPT
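As an extension of the checks above, a similar query can confirm that the tracked versions recorded by the installation script match what the packages report (sketch; column names taken from the tracking INSERT earlier in this diff):

SELECT package_name, package_version
  FROM ct_mrds.a_package_version_tracking
 WHERE tracked_by_module = 'MARS-1409'
 ORDER BY package_name;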

View File

@@ -28,22 +28,22 @@ AS
) IS
BEGIN
BEGIN
ENV_MANAGER.LOG_PROCESS_EVENT('Attempting to delete potentially corrupted file: ' || pFileUri, 'DEBUG', pParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Attempting to delete potentially corrupted file: ' || pFileUri, 'DEBUG', pParameters, 'DATA_EXPORTER');
DBMS_CLOUD.DELETE_OBJECT(
credential_name => pCredentialName,
object_uri => pFileUri
);
ENV_MANAGER.LOG_PROCESS_EVENT('Deleted existing file (cleanup before retry): ' || pFileUri, 'INFO', pParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Deleted existing file (cleanup before retry): ' || pFileUri, 'INFO', pParameters, 'DATA_EXPORTER');
EXCEPTION
WHEN OTHERS THEN
-- Object not found is OK (file doesn't exist)
IF SQLCODE = -20404 THEN
ENV_MANAGER.LOG_PROCESS_EVENT('File does not exist (OK): ' || pFileUri, 'DEBUG', pParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('File does not exist (OK): ' || pFileUri, 'DEBUG', pParameters, 'DATA_EXPORTER');
ELSE
-- Log but don't fail - export will attempt anyway
ENV_MANAGER.LOG_PROCESS_EVENT('Warning: Could not delete file (will retry export anyway): ' || SQLERRM, 'WARNING', pParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Warning: Could not delete file (will retry export anyway): ' || SQLERRM, 'WARNING', pParameters, 'DATA_EXPORTER');
END IF;
END;
END DELETE_FAILED_EXPORT_FILE;
@@ -367,10 +367,10 @@ AS
AND L.LOAD_START < :pMaxDate
ORDER BY YR, MN';
ENV_MANAGER.LOG_PROCESS_EVENT('Executing date range query: ' || vSql, 'DEBUG', pParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Executing date range query: ' || vSql, 'DEBUG', pParameters, 'DATA_EXPORTER');
EXECUTE IMMEDIATE vSql BULK COLLECT INTO vKeyValuesYear, vKeyValuesMonth USING pMinDate, pMaxDate;
ENV_MANAGER.LOG_PROCESS_EVENT('Found ' || vKeyValuesYear.COUNT || ' year/month combinations to export', 'DEBUG', pParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Found ' || vKeyValuesYear.COUNT || ' year/month combinations to export', 'DEBUG', pParameters, 'DATA_EXPORTER');
-- Convert to partition_tab
vPartitions := partition_tab();
@@ -437,7 +437,7 @@ AS
'PARTITION_MONTH=' || sanitizeFilename(pMonth) || '/' ||
sanitizeFilename(pYear) || sanitizeFilename(pMonth) || '.parquet';
ENV_MANAGER.LOG_PROCESS_EVENT('Parquet export URI: ' || vUri, 'DEBUG', pParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Parquet export URI: ' || vUri, 'DEBUG', pParameters, 'DATA_EXPORTER');
-- Delete potentially corrupted file from previous failed attempt
-- This prevents Oracle from creating _1 suffixed files on retry
@@ -456,7 +456,7 @@ AS
CASE WHEN pFolderName IS NOT NULL THEN pFolderName || '/' ELSE '' END ||
sanitizeFilename(vFileName);
ENV_MANAGER.LOG_PROCESS_EVENT('CSV export URI: ' || vUri, 'DEBUG', pParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('CSV export URI: ' || vUri, 'DEBUG', pParameters, 'DATA_EXPORTER');
-- Delete potentially corrupted file from previous failed attempt
-- This prevents Oracle from creating _1 suffixed files on retry
@@ -484,8 +484,8 @@ AS
RAISE_APPLICATION_ERROR(-20001, 'Unsupported format: ' || pFormat || '. Use PARQUET or CSV.');
END IF;
ENV_MANAGER.LOG_PROCESS_EVENT('Processing Year/Month: ' || pYear || '/' || pMonth || ' (Format: ' || pFormat || ')', 'DEBUG', pParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Export query: ' || vQuery, 'DEBUG', pParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Processing Year/Month: ' || pYear || '/' || pMonth || ' (Format: ' || pFormat || ')', 'DEBUG', pParameters, 'DATA_EXPORTER');
ENV_MANAGER.LOG_PROCESS_EVENT('Export query: ' || vQuery, 'DEBUG', pParameters, 'DATA_EXPORTER');
END EXPORT_SINGLE_PARTITION;
----------------------------------------------------------------------------------------------------
@@ -550,7 +550,7 @@ AS
WHERE CHUNK_ID = pStartId;
vParameters := 'Parallel task - Year: ' || vYear || ', Month: ' || vMonth || ', ChunkID: ' || pStartId;
ENV_MANAGER.LOG_PROCESS_EVENT('Starting parallel export for partition ' || vYear || '/' || vMonth, 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Starting parallel export for partition ' || vYear || '/' || vMonth, 'DEBUG', vParameters, 'DATA_EXPORTER');
-- Mark chunk as PROCESSING
UPDATE CT_MRDS.A_PARALLEL_EXPORT_CHUNKS
@@ -586,12 +586,12 @@ AS
WHERE CHUNK_ID = pStartId;
COMMIT;
ENV_MANAGER.LOG_PROCESS_EVENT('Completed parallel export for partition ' || vYear || '/' || vMonth, 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Completed parallel export for partition ' || vYear || '/' || vMonth, 'DEBUG', vParameters, 'DATA_EXPORTER');
EXCEPTION
WHEN OTHERS THEN
-- Capture error details in variable (SQLERRM cannot be used directly in SQL)
vgMsgTmp := 'Parallel task error for partition ' || vYear || '/' || vMonth || ' (ChunkID: ' || pStartId || '): ' || SQLERRM || cgBL || DBMS_UTILITY.FORMAT_ERROR_BACKTRACE;
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters, 'DATA_EXPORTER');
-- Mark chunk as FAILED with error message
-- Use vgMsgTmp variable instead of SQLERRM directly (Oracle limitation in SQL context)
@@ -643,7 +643,7 @@ AS
,'pFolderName => '''||nvl(pFolderName, 'NULL')||''''
,'pCredentialName => '''||nvl(pCredentialName, 'NULL')||''''
));
ENV_MANAGER.LOG_PROCESS_EVENT('Start','INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Start','INFO', vParameters, 'DATA_EXPORTER');
-- Get bucket URI based on bucket area using FILE_MANAGER function
vBucketUri := FILE_MANAGER.GET_BUCKET_URI(pBucketArea);
@@ -692,8 +692,8 @@ AS
-- Process column list to add T. prefix and alias key column as A_WORKFLOW_HISTORY_KEY
vProcessedColumnList := processColumnList(vAllColumnsList, vTableName, vSchemaName, vKeyColumnName);
ENV_MANAGER.LOG_PROCESS_EVENT('Dynamic column list built: ' || vAllColumnsList, 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Processed column list with T. prefix: ' || vProcessedColumnList, 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Dynamic column list built: ' || vAllColumnsList, 'DEBUG', vParameters, 'DATA_EXPORTER');
ENV_MANAGER.LOG_PROCESS_EVENT('Processed column list with T. prefix: ' || vProcessedColumnList, 'DEBUG', vParameters, 'DATA_EXPORTER');
vTableName := DBMS_ASSERT.SCHEMA_NAME(vSchemaName) || '.' || DBMS_ASSERT.simple_sql_name(vTableName);
-- Fetch unique key values from A_LOAD_HISTORY
@@ -701,9 +701,9 @@ AS
' FROM ' || vTableName || ' T, CT_ODS.A_LOAD_HISTORY L' ||
' WHERE T.' || DBMS_ASSERT.simple_sql_name(vKeyColumnName) || ' = L.A_ETL_LOAD_SET_KEY';
ENV_MANAGER.LOG_PROCESS_EVENT('Executing key values query: ' || vSql, 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Executing key values query: ' || vSql, 'DEBUG', vParameters, 'DATA_EXPORTER');
EXECUTE IMMEDIATE vSql BULK COLLECT INTO vKeyValues;
ENV_MANAGER.LOG_PROCESS_EVENT('Found ' || vKeyValues.COUNT || ' unique key values to process', 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Found ' || vKeyValues.COUNT || ' unique key values to process', 'DEBUG', vParameters, 'DATA_EXPORTER');
-- Loop over each unique key value
FOR i IN 1 .. vKeyValues.COUNT LOOP
@@ -734,9 +734,9 @@ AS
CASE WHEN pFolderName IS NOT NULL THEN pFolderName || '/' ELSE '' END ||
sanitizeFilename(vKeyValue) || '.csv';
ENV_MANAGER.LOG_PROCESS_EVENT('Processing key value: ' || vKeyValue || ' (' || (i) || '/' || vKeyValues.COUNT || ')', 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Export query: ' || vQuery, 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Export URI: ' || vUri, 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Processing key value: ' || vKeyValue || ' (' || (i) || '/' || vKeyValues.COUNT || ')', 'DEBUG', vParameters, 'DATA_EXPORTER');
ENV_MANAGER.LOG_PROCESS_EVENT('Export query: ' || vQuery, 'DEBUG', vParameters, 'DATA_EXPORTER');
ENV_MANAGER.LOG_PROCESS_EVENT('Export URI: ' || vUri, 'DEBUG', vParameters, 'DATA_EXPORTER');
-- Use DBMS_CLOUD package to export data to the URI
DBMS_CLOUD.EXPORT_DATA(
@@ -746,24 +746,24 @@ AS
format => json_object('type' VALUE 'CSV', 'header' VALUE true)
);
END LOOP;
ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO',vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO', vParameters, 'DATA_EXPORTER');
EXCEPTION
WHEN ENV_MANAGER.ERR_TABLE_NOT_EXISTS THEN
vgMsgTmp := ENV_MANAGER.MSG_TABLE_NOT_EXISTS ||': '||vTableName;
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters, 'DATA_EXPORTER');
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_TABLE_NOT_EXISTS, vgMsgTmp);
WHEN ENV_MANAGER.ERR_COLUMN_NOT_EXISTS THEN
vgMsgTmp := ENV_MANAGER.MSG_COLUMN_NOT_EXISTS || ' (TableName.ColumnName): ' || vTableName||'.'||vKeyColumnName||CASE WHEN vCurrentCol IS NOT NULL THEN '.'||vCurrentCol||' in column list' ELSE '' END;
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters, 'DATA_EXPORTER');
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_COLUMN_NOT_EXISTS, vgMsgTmp);
WHEN ENV_MANAGER.ERR_UNSUPPORTED_DATA_TYPE THEN
vgMsgTmp := ENV_MANAGER.MSG_UNSUPPORTED_DATA_TYPE || ' vDataType: '||vDataType;
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters, 'DATA_EXPORTER');
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNSUPPORTED_DATA_TYPE, vgMsgTmp);
WHEN OTHERS THEN
-- Log complete error details including full stack trace and backtrace
ENV_MANAGER.LOG_PROCESS_ERROR('Export failed: ' || SQLERRM, vParameters, 'DATA_EXPORTER');
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters, 'DATA_EXPORTER');
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END EXPORT_TABLE_DATA;
@@ -806,7 +806,7 @@ AS
,'pTemplateTableName => '''||nvl(pTemplateTableName, 'NULL')||''''
,'pCredentialName => '''||nvl(pCredentialName, 'NULL')||''''
));
ENV_MANAGER.LOG_PROCESS_EVENT('Start','INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Start','INFO', vParameters, 'DATA_EXPORTER');
-- Get bucket URI based on bucket area using FILE_MANAGER function
vBucketUri := FILE_MANAGER.GET_BUCKET_URI(pBucketArea);
@@ -822,27 +822,27 @@ AS
-- Build query with TO_CHAR for date columns (per-column format support)
vProcessedColumnList := buildQueryWithDateFormats(pColumnList, vTableName, vSchemaName, vKeyColumnName, pTemplateTableName, 'PARQUET');
ENV_MANAGER.LOG_PROCESS_EVENT('Input column list: ' || NVL(pColumnList, 'NULL (building dynamic list from table metadata)'), 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Processed column list with TO_CHAR for date columns: ' || vProcessedColumnList, 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Template table: ' || NVL(pTemplateTableName, 'NULL - using global default for all dates'), 'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Input column list: ' || NVL(pColumnList, 'NULL (building dynamic list from table metadata)'), 'DEBUG', vParameters, 'DATA_EXPORTER');
ENV_MANAGER.LOG_PROCESS_EVENT('Processed column list with TO_CHAR for date columns: ' || vProcessedColumnList, 'DEBUG', vParameters, 'DATA_EXPORTER');
ENV_MANAGER.LOG_PROCESS_EVENT('Template table: ' || NVL(pTemplateTableName, 'NULL - using global default for all dates'), 'INFO', vParameters, 'DATA_EXPORTER');
vTableName := DBMS_ASSERT.SCHEMA_NAME(vSchemaName) || '.' || DBMS_ASSERT.simple_sql_name(vTableName);
-- Validate parallel degree parameter
IF pParallelDegree < 1 OR pParallelDegree > 16 THEN
vgMsgTmp := ENV_MANAGER.MSG_INVALID_PARALLEL_DEGREE || ': ' || pParallelDegree || '. Valid range: 1-16';
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters, 'DATA_EXPORTER');
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_INVALID_PARALLEL_DEGREE, vgMsgTmp);
END IF;
-- Get partitions using shared function
vPartitions := GET_PARTITIONS(vSchemaName, vTableName, vKeyColumnName, pMinDate, pMaxDate, vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Found ' || vPartitions.COUNT || ' partitions to export with parallel degree ' || pParallelDegree, 'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Found ' || vPartitions.COUNT || ' partitions to export with parallel degree ' || pParallelDegree, 'INFO', vParameters, 'DATA_EXPORTER');
-- Sequential processing (parallel degree = 1)
IF pParallelDegree = 1 THEN
ENV_MANAGER.LOG_PROCESS_EVENT('Using sequential processing (pParallelDegree = 1)', 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Using sequential processing (pParallelDegree = 1)', 'DEBUG', vParameters, 'DATA_EXPORTER');
FOR i IN 1 .. vPartitions.COUNT LOOP
EXPORT_SINGLE_PARTITION(
@@ -868,13 +868,13 @@ AS
ELSE
-- Skip parallel processing if no partitions found
IF vPartitions.COUNT = 0 THEN
ENV_MANAGER.LOG_PROCESS_EVENT('No partitions to export - skipping parallel processing', 'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('No partitions to export - skipping parallel processing', 'INFO', vParameters, 'DATA_EXPORTER');
ELSE
DECLARE
vTaskName VARCHAR2(128) := 'DATA_EXPORT_TASK_' || TO_CHAR(SYSTIMESTAMP, 'YYYYMMDDHH24MISSFF');
vChunkId NUMBER;
BEGIN
ENV_MANAGER.LOG_PROCESS_EVENT('Using parallel processing with ' || pParallelDegree || ' threads', 'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Using parallel processing with ' || pParallelDegree || ' threads', 'INFO', vParameters, 'DATA_EXPORTER');
-- Clean up old completed chunks (>24 hours) to prevent table bloat
-- CRITICAL: Do NOT delete chunks from other active sessions (same-day tasks)
@@ -884,12 +884,12 @@ AS
AND CREATED_DATE < SYSTIMESTAMP - INTERVAL '1' DAY;
COMMIT;
ENV_MANAGER.LOG_PROCESS_EVENT('Cleared old COMPLETED chunks (>24h). Active session chunks preserved.', 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Cleared old COMPLETED chunks (>24h). Active session chunks preserved.', 'DEBUG', vParameters, 'DATA_EXPORTER');
-- This prevents re-exporting successfully completed partitions
DELETE FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE STATUS = 'COMPLETED';
COMMIT;
ENV_MANAGER.LOG_PROCESS_EVENT('Cleared COMPLETED chunks. FAILED chunks retained for retry.', 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Cleared COMPLETED chunks. FAILED chunks retained for retry.', 'DEBUG', vParameters, 'DATA_EXPORTER');
-- Populate chunks table (insert new chunks, preserve FAILED chunks for retry)
FOR i IN 1 .. vPartitions.COUNT LOOP
@@ -918,7 +918,7 @@ AS
SELECT COUNT(*) INTO vPendingCount FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE STATUS = 'PENDING';
SELECT COUNT(*) INTO vFailedCount FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE STATUS = 'FAILED';
ENV_MANAGER.LOG_PROCESS_EVENT('Chunk statistics: PENDING=' || vPendingCount || ', FAILED (retry)=' || vFailedCount, 'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Chunk statistics: PENDING=' || vPendingCount || ', FAILED (retry)=' || vFailedCount, 'INFO', vParameters, 'DATA_EXPORTER');
END;
-- Create parallel task
@@ -934,7 +934,7 @@ AS
);
-- Execute task in parallel
ENV_MANAGER.LOG_PROCESS_EVENT('Executing parallel task: ' || vTaskName, 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Executing parallel task: ' || vTaskName, 'DEBUG', vParameters, 'DATA_EXPORTER');
DBMS_PARALLEL_EXECUTE.RUN_TASK(
task_name => vTaskName,
@@ -953,7 +953,7 @@ AS
IF vErrorCount > 0 THEN
vgMsgTmp := 'Parallel execution completed with ' || vErrorCount || ' errors. Check USER_PARALLEL_EXECUTE_CHUNKS for details.';
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters, 'DATA_EXPORTER');
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_PARALLEL_EXECUTION_FAILED, vgMsgTmp);
END IF;
END;
@@ -966,7 +966,7 @@ AS
DELETE FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE TASK_NAME = vTaskName;
COMMIT;
ENV_MANAGER.LOG_PROCESS_EVENT('Parallel execution completed successfully', 'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Parallel execution completed successfully', 'INFO', vParameters, 'DATA_EXPORTER');
EXCEPTION
WHEN OTHERS THEN
-- Attempt to drop task on error
@@ -977,21 +977,21 @@ AS
END;
vgMsgTmp := ENV_MANAGER.MSG_PARALLEL_EXECUTION_FAILED || ': ' || SQLERRM || cgBL || DBMS_UTILITY.FORMAT_ERROR_BACKTRACE;
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters, 'DATA_EXPORTER');
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_PARALLEL_EXECUTION_FAILED, vgMsgTmp);
END;
END IF;
END IF;
ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO',vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO', vParameters, 'DATA_EXPORTER');
EXCEPTION
WHEN ENV_MANAGER.ERR_TABLE_NOT_EXISTS THEN
vgMsgTmp := ENV_MANAGER.MSG_TABLE_NOT_EXISTS ||': '||vTableName;
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters, 'DATA_EXPORTER');
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_TABLE_NOT_EXISTS, vgMsgTmp);
WHEN ENV_MANAGER.ERR_COLUMN_NOT_EXISTS THEN
vgMsgTmp := ENV_MANAGER.MSG_COLUMN_NOT_EXISTS || ' (TableName.ColumnName): ' || vTableName||'.'||vKeyColumnName||CASE WHEN vCurrentCol IS NOT NULL THEN '.'||vCurrentCol||' in pColumnList' ELSE '' END;
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters, 'DATA_EXPORTER');
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_COLUMN_NOT_EXISTS, vgMsgTmp);
WHEN ENV_MANAGER.ERR_INVALID_PARALLEL_DEGREE THEN
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_INVALID_PARALLEL_DEGREE, vgMsgTmp);
@@ -1000,7 +1000,7 @@ AS
WHEN OTHERS THEN
-- Log complete error details including full stack trace and backtrace
ENV_MANAGER.LOG_PROCESS_ERROR('Export failed: ' || SQLERRM, vParameters, 'DATA_EXPORTER');
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters, 'DATA_EXPORTER');
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END EXPORT_TABLE_DATA_BY_DATE;
@@ -1073,7 +1073,7 @@ AS
,'pMaxFileSize => '''||nvl(TO_CHAR(pMaxFileSize), 'NULL')||''''
,'pCredentialName => '''||nvl(pCredentialName, 'NULL')||''''
));
ENV_MANAGER.LOG_PROCESS_EVENT('Start','INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Start','INFO', vParameters, 'DATA_EXPORTER');
-- Get bucket URI based on bucket area using FILE_MANAGER function
vBucketUri := FILE_MANAGER.GET_BUCKET_URI(pBucketArea);
@@ -1105,29 +1105,29 @@ AS
-- Build query with TO_CHAR for date columns (per-column format support)
vProcessedColumnList := buildQueryWithDateFormats(pColumnList, vTableName, vSchemaName, vKeyColumnName, pTemplateTableName);
ENV_MANAGER.LOG_PROCESS_EVENT('Input column list: ' || NVL(pColumnList, 'NULL (using dynamic column list)'), 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Processed column list with TO_CHAR for date columns: ' || vProcessedColumnList, 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Template table: ' || NVL(pTemplateTableName, 'NULL - using global default for all dates'), 'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Input column list: ' || NVL(pColumnList, 'NULL (using dynamic column list)'), 'DEBUG', vParameters, 'DATA_EXPORTER');
ENV_MANAGER.LOG_PROCESS_EVENT('Processed column list with TO_CHAR for date columns: ' || vProcessedColumnList, 'DEBUG', vParameters, 'DATA_EXPORTER');
ENV_MANAGER.LOG_PROCESS_EVENT('Template table: ' || NVL(pTemplateTableName, 'NULL - using global default for all dates'), 'INFO', vParameters, 'DATA_EXPORTER');
vTableName := DBMS_ASSERT.SCHEMA_NAME(vSchemaName) || '.' || DBMS_ASSERT.simple_sql_name(vTableName);
-- Validate parallel degree parameter
IF pParallelDegree < 1 OR pParallelDegree > 16 THEN
vgMsgTmp := ENV_MANAGER.MSG_INVALID_PARALLEL_DEGREE || ': ' || pParallelDegree || '. Valid range: 1-16';
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters, 'DATA_EXPORTER');
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_INVALID_PARALLEL_DEGREE, vgMsgTmp);
END IF;
-- Get partitions using shared function
vPartitions := GET_PARTITIONS(vSchemaName, vTableName, vKeyColumnName, pMinDate, pMaxDate, vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Found ' || vPartitions.COUNT || ' year/month combinations to export', 'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Date range: ' || TO_CHAR(pMinDate, 'YYYY-MM-DD HH24:MI:SS') || ' to ' || TO_CHAR(pMaxDate, 'YYYY-MM-DD HH24:MI:SS'), 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Parallel degree: ' || pParallelDegree, 'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Found ' || vPartitions.COUNT || ' year/month combinations to export', 'INFO', vParameters, 'DATA_EXPORTER');
ENV_MANAGER.LOG_PROCESS_EVENT('Date range: ' || TO_CHAR(pMinDate, 'YYYY-MM-DD HH24:MI:SS') || ' to ' || TO_CHAR(pMaxDate, 'YYYY-MM-DD HH24:MI:SS'), 'DEBUG', vParameters, 'DATA_EXPORTER');
ENV_MANAGER.LOG_PROCESS_EVENT('Parallel degree: ' || pParallelDegree, 'INFO', vParameters, 'DATA_EXPORTER');
-- Sequential processing (parallel degree = 1)
IF pParallelDegree = 1 THEN
ENV_MANAGER.LOG_PROCESS_EVENT('Using sequential processing (pParallelDegree = 1)', 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Using sequential processing (pParallelDegree = 1)', 'DEBUG', vParameters, 'DATA_EXPORTER');
FOR i IN 1 .. vPartitions.COUNT LOOP
EXPORT_SINGLE_PARTITION(
@@ -1153,13 +1153,13 @@ AS
ELSE
-- Skip parallel processing if no partitions found
IF vPartitions.COUNT = 0 THEN
ENV_MANAGER.LOG_PROCESS_EVENT('No partitions to export - skipping parallel CSV processing', 'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('No partitions to export - skipping parallel CSV processing', 'INFO', vParameters, 'DATA_EXPORTER');
ELSE
DECLARE
vTaskName VARCHAR2(128) := 'DATA_CSV_EXPORT_TASK_' || TO_CHAR(SYSTIMESTAMP, 'YYYYMMDDHH24MISSFF');
vChunkId NUMBER;
BEGIN
ENV_MANAGER.LOG_PROCESS_EVENT('Using parallel processing with ' || pParallelDegree || ' threads', 'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Using parallel processing with ' || pParallelDegree || ' threads', 'INFO', vParameters, 'DATA_EXPORTER');
-- Clean up old completed chunks (>24 hours) to prevent table bloat
-- CRITICAL: Do NOT delete chunks from other active sessions (same-day tasks)
@@ -1169,7 +1169,7 @@ AS
AND CREATED_DATE < SYSTIMESTAMP - INTERVAL '1' DAY;
COMMIT;
ENV_MANAGER.LOG_PROCESS_EVENT('Cleared old COMPLETED chunks (>24h). Active session chunks preserved.', 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Cleared old COMPLETED chunks (>24h). Active session chunks preserved.', 'DEBUG', vParameters, 'DATA_EXPORTER');
-- Populate chunks table (insert new chunks, preserve FAILED chunks for retry)
FOR i IN 1 .. vPartitions.COUNT LOOP
@@ -1198,7 +1198,7 @@ AS
SELECT COUNT(*) INTO vPendingCount FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE STATUS = 'PENDING';
SELECT COUNT(*) INTO vFailedCount FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE STATUS = 'FAILED';
ENV_MANAGER.LOG_PROCESS_EVENT('Chunk statistics: PENDING=' || vPendingCount || ', FAILED (retry)=' || vFailedCount, 'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Chunk statistics: PENDING=' || vPendingCount || ', FAILED (retry)=' || vFailedCount, 'INFO', vParameters, 'DATA_EXPORTER');
END;
-- Create parallel task
@@ -1214,7 +1214,7 @@ AS
);
-- Execute task in parallel
ENV_MANAGER.LOG_PROCESS_EVENT('Executing parallel CSV export task: ' || vTaskName, 'DEBUG', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Executing parallel CSV export task: ' || vTaskName, 'DEBUG', vParameters, 'DATA_EXPORTER');
DBMS_PARALLEL_EXECUTE.RUN_TASK(
task_name => vTaskName,
@@ -1233,7 +1233,7 @@ AS
IF vErrorCount > 0 THEN
vgMsgTmp := 'Parallel CSV export completed with ' || vErrorCount || ' errors. Check USER_PARALLEL_EXECUTE_CHUNKS for details.';
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters, 'DATA_EXPORTER');
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_PARALLEL_EXECUTION_FAILED, vgMsgTmp);
END IF;
END;
@@ -1246,7 +1246,7 @@ AS
DELETE FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE TASK_NAME = vTaskName;
COMMIT;
ENV_MANAGER.LOG_PROCESS_EVENT('Parallel CSV execution completed successfully', 'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Parallel CSV execution completed successfully', 'INFO', vParameters, 'DATA_EXPORTER');
EXCEPTION
WHEN OTHERS THEN
-- Attempt to drop task on error
@@ -1257,23 +1257,23 @@ AS
END;
vgMsgTmp := ENV_MANAGER.MSG_PARALLEL_EXECUTION_FAILED || ': ' || SQLERRM || cgBL || DBMS_UTILITY.FORMAT_ERROR_BACKTRACE;
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters, 'DATA_EXPORTER');
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_PARALLEL_EXECUTION_FAILED, vgMsgTmp);
END;
END IF;
END IF;
ENV_MANAGER.LOG_PROCESS_EVENT('Export completed successfully for ' || vPartitions.COUNT || ' files', 'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO',vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Export completed successfully for ' || vPartitions.COUNT || ' files', 'INFO', vParameters, 'DATA_EXPORTER');
ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO', vParameters, 'DATA_EXPORTER');
EXCEPTION
WHEN ENV_MANAGER.ERR_TABLE_NOT_EXISTS THEN
vgMsgTmp := ENV_MANAGER.MSG_TABLE_NOT_EXISTS ||': '||vTableName;
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters, 'DATA_EXPORTER');
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_TABLE_NOT_EXISTS, vgMsgTmp);
WHEN ENV_MANAGER.ERR_COLUMN_NOT_EXISTS THEN
vgMsgTmp := ENV_MANAGER.MSG_COLUMN_NOT_EXISTS || ' (TableName.ColumnName): ' || vTableName||'.'||vKeyColumnName||CASE WHEN vCurrentCol IS NOT NULL THEN '.'||vCurrentCol||' in pColumnList' ELSE '' END;
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters, 'DATA_EXPORTER');
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_COLUMN_NOT_EXISTS, vgMsgTmp);
WHEN ENV_MANAGER.ERR_INVALID_PARALLEL_DEGREE THEN
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_INVALID_PARALLEL_DEGREE, vgMsgTmp);
@@ -1282,7 +1282,7 @@ AS
WHEN OTHERS THEN
-- Log complete error details including full stack trace and backtrace
ENV_MANAGER.LOG_PROCESS_ERROR('Export failed: ' || SQLERRM, vParameters, 'DATA_EXPORTER');
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters, 'DATA_EXPORTER');
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END EXPORT_TABLE_DATA_TO_CSV_BY_DATE;

View File

@@ -39,6 +39,8 @@ AS
Errors(CODE_MOVE_FILE_TO_TRASH_FAILED) := Error_Record(CODE_MOVE_FILE_TO_TRASH_FAILED, MSG_MOVE_FILE_TO_TRASH_FAILED); -- -20032
Errors(CODE_DROP_EXPORTED_FILES_FAILED) := Error_Record(CODE_DROP_EXPORTED_FILES_FAILED, MSG_DROP_EXPORTED_FILES_FAILED); -- -20033
Errors(CODE_INVALID_BUCKET_AREA) := Error_Record(CODE_INVALID_BUCKET_AREA, MSG_INVALID_BUCKET_AREA); -- -20034
Errors(CODE_WORKFLOW_KEY_NULL) := Error_Record(CODE_WORKFLOW_KEY_NULL, MSG_WORKFLOW_KEY_NULL); -- -20035
Errors(CODE_MULTIPLE_WORKFLOW_KEYS) := Error_Record(CODE_MULTIPLE_WORKFLOW_KEYS, MSG_MULTIPLE_WORKFLOW_KEYS); -- -20036
Errors(CODE_INVALID_PARALLEL_DEGREE) := Error_Record(CODE_INVALID_PARALLEL_DEGREE, MSG_INVALID_PARALLEL_DEGREE); -- -20110
Errors(CODE_PARALLEL_EXECUTION_FAILED) := Error_Record(CODE_PARALLEL_EXECUTION_FAILED, MSG_PARALLEL_EXECUTION_FAILED); -- -20111

@@ -17,12 +17,13 @@ AS
**/
-- Package Version Information (Semantic Versioning: MAJOR.MINOR.PATCH)
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.2.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2025-12-20 10:00:00';
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.3.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-02-27 09:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(100) := 'Grzegorz Michalski';
-- Version History (Latest changes first)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'3.3.0 (2026-02-27): MARS-1409 - Added error codes for A_WORKFLOW_HISTORY_KEY validation (CODE_WORKFLOW_KEY_NULL -20035, CODE_MULTIPLE_WORKFLOW_KEYS -20036)' || CHR(13)||CHR(10) ||
'3.2.0 (2025-12-20): Added error codes for parallel execution support (CODE_INVALID_PARALLEL_DEGREE -20110, CODE_PARALLEL_EXECUTION_FAILED -20111)' || CHR(13)||CHR(10) ||
'3.1.0 (2025-10-22): Added package hash tracking and automatic change detection system (SHA256 hashing)' || CHR(13)||CHR(10) ||
'3.0.0 (2025-10-22): Added package versioning system with centralized version management functions' || CHR(13)||CHR(10) ||
@@ -297,6 +298,18 @@ AS
PRAGMA EXCEPTION_INIT( ERR_INVALID_BUCKET_AREA
,CODE_INVALID_BUCKET_AREA);
ERR_WORKFLOW_KEY_NULL EXCEPTION;
CODE_WORKFLOW_KEY_NULL CONSTANT PLS_INTEGER := -20035;
MSG_WORKFLOW_KEY_NULL VARCHAR2(4000) := 'File validation failed: A_WORKFLOW_HISTORY_KEY column contains NULL value';
PRAGMA EXCEPTION_INIT( ERR_WORKFLOW_KEY_NULL
,CODE_WORKFLOW_KEY_NULL);
ERR_MULTIPLE_WORKFLOW_KEYS EXCEPTION;
CODE_MULTIPLE_WORKFLOW_KEYS CONSTANT PLS_INTEGER := -20036;
MSG_MULTIPLE_WORKFLOW_KEYS VARCHAR2(4000) := 'File validation failed: Multiple distinct A_WORKFLOW_HISTORY_KEY values found in file. Each file must contain exactly one workflow execution key';
PRAGMA EXCEPTION_INIT( ERR_MULTIPLE_WORKFLOW_KEYS
,CODE_MULTIPLE_WORKFLOW_KEYS);
ERR_INVALID_PARALLEL_DEGREE EXCEPTION;
CODE_INVALID_PARALLEL_DEGREE CONSTANT PLS_INTEGER := -20110;
MSG_INVALID_PARALLEL_DEGREE VARCHAR2(4000) := 'Invalid parallel degree parameter. Must be between 1 and 16';
@@ -311,7 +324,7 @@ AS
ERR_UNKNOWN EXCEPTION;
CODE_UNKNOWN CONSTANT PLS_INTEGER := -20999;
MSG_UNKNOWN VARCHAR2(4000) := 'Unknown Error Occured';
MSG_UNKNOWN VARCHAR2(4000) := 'Unknown Error Occurred';
PRAGMA EXCEPTION_INIT( ERR_UNKNOWN
,CODE_UNKNOWN);

@@ -127,11 +127,11 @@ AS
if vTableStat.OVER_ARCH_THRESOLD_FILE_COUNT >= vSourceFileConfig.FILES_COUNT_OVER_ARCHIVE_THRESHOLD then vArchivalTriggeredBy := 'FILES_COUNT';
elsif vTableStat.OVER_ARCH_THRESOLD_ROW_COUNT >= vSourceFileConfig.ROWS_COUNT_OVER_ARCHIVE_THRESHOLD then vArchivalTriggeredBy := vArchivalTriggeredBy||', ROWS_COUNT';
elsif vTableStat.OVER_ARCH_THRESOLD_SIZE >= vSourceFileConfig.BYTES_SUM_OVER_ARCHIVE_THRESHOLD then vArchivalTriggeredBy := vArchivalTriggeredBy||', BYTES_SUM';
else ENV_MANAGER.LOG_PROCESS_EVENT('Non of archival triggers reached','INFO');
else ENV_MANAGER.LOG_PROCESS_EVENT('None of the archival triggers reached','INFO', vParameters);
end if;
if LENGTH(vArchivalTriggeredBy)>0 THEN
ENV_MANAGER.LOG_PROCESS_EVENT('Archival Triggered By: '||vArchivalTriggeredBy,'INFO');
ENV_MANAGER.LOG_PROCESS_EVENT('Archival Triggered By: '||vArchivalTriggeredBy,'INFO', vParameters);
vTableName := DBMS_ASSERT.SCHEMA_NAME(vSourceFileConfig.ODS_SCHEMA_NAME) || '.'||vSourceFileConfig.A_SOURCE_KEY||'_'||DBMS_ASSERT.simple_sql_name(vSourceFileConfig.TABLE_ID)||'_ODS';
-- Use strategy-based WHERE clause (MARS-828)
@@ -171,7 +171,7 @@ AS
;
vUri := FILE_MANAGER.GET_BUCKET_URI('ARCHIVE')||vSourceFileConfig.A_SOURCE_KEY||'/'||vSourceFileConfig.TABLE_ID||'/PARTITION_YEAR='||ym_loop.year||'/PARTITION_MONTH='||ym_loop.month||'/';
ENV_MANAGER.LOG_PROCESS_EVENT('Start Archiving for YEAR_MONTH: '||ym_loop.year||'_'||ym_loop.month ,'INFO');
ENV_MANAGER.LOG_PROCESS_EVENT('Start Archiving for YEAR_MONTH: '||ym_loop.year||'_'||ym_loop.month ,'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Parameter for DBMS_CLOUD.EXPORT_DATA => file_uri_list' ,'DEBUG',vUri);
ENV_MANAGER.LOG_PROCESS_EVENT('Parameter for DBMS_CLOUD.EXPORT_DATA => query' ,'DEBUG',vQuery);
@@ -193,7 +193,7 @@ AS
END;
ENV_MANAGER.LOG_PROCESS_EVENT('vOperationId of export: '||vOperationId,'DEBUG');
ENV_MANAGER.LOG_PROCESS_EVENT('vOperationId of export: '||vOperationId,'DEBUG', vParameters);
-- Get USER_LOAD_OPERATIONS info
select *
@@ -250,10 +250,10 @@ AS
target_object_uri => replace(f.pathname,'ODS','TRASH')||'/'||f.filename,
target_credential_name => ENV_MANAGER.gvCredentialName
);
ENV_MANAGER.LOG_PROCESS_EVENT('File moved to TRASH.','DEBUG', f.pathname||'/'||f.filename);
ENV_MANAGER.LOG_PROCESS_EVENT('File moved to TRASH: '||f.pathname||'/'||f.filename,'DEBUG', vParameters);
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT('Failed to move file to TRASH.','ERROR', f.pathname||'/'||f.filename);
ENV_MANAGER.LOG_PROCESS_EVENT('Failed to move file to TRASH: '||f.pathname||'/'||f.filename,'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
rollback;
vProcessControlStatus := 'MOVE_FILE_TO_TRASH_FAILURE';
@@ -269,7 +269,7 @@ AS
--Drop file from TRASH
DBMS_CLOUD.DELETE_OBJECT(credential_name => ENV_MANAGER.gvCredentialName,
object_uri => replace(f.pathname,'ODS','TRASH')||'/'||f.filename);
ENV_MANAGER.LOG_PROCESS_EVENT('File dropped from TRASH.','DEBUG', f.pathname||'/'||f.filename);
ENV_MANAGER.LOG_PROCESS_EVENT('File dropped from TRASH: '||f.pathname||'/'||f.filename,'DEBUG', vParameters);
END LOOP;
--ROLLBACK PART
@@ -290,7 +290,7 @@ AS
target_object_uri => f.pathname||'/'||f.filename,
target_credential_name => ENV_MANAGER.gvCredentialName
);
ENV_MANAGER.LOG_PROCESS_EVENT('File restored from TRASH.','DEBUG', f.pathname||'/'||f.filename);
ENV_MANAGER.LOG_PROCESS_EVENT('File restored from TRASH: '||f.pathname||'/'||f.filename,'DEBUG', vParameters);
UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED r
SET PROCESSING_STATUS = 'INGESTED'
@@ -301,7 +301,7 @@ AS
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT('Failed to restore file from TRASH.','ERROR', replace(f.pathname,'ODS','TRASH')||'/'||f.filename);
ENV_MANAGER.LOG_PROCESS_EVENT('Failed to restore file from TRASH: '||replace(f.pathname,'ODS','TRASH')||'/'||f.filename,'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
vProcessControlStatus := 'RESTORE_FILE_FROM_TRASH_FAILURE';
END;
@@ -309,18 +309,18 @@ AS
DBMS_CLOUD.DELETE_OBJECT(credential_name => ENV_MANAGER.gvCredentialName,
object_uri => vUri||vFilename);
ENV_MANAGER.LOG_PROCESS_EVENT('ROLLBACK operation: Archival PARQUET file dropped.','DEBUG', vUri||vFilename);
ENV_MANAGER.LOG_PROCESS_EVENT('ROLLBACK operation: Archival PARQUET file dropped: '||vUri||vFilename,'DEBUG', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_MOVE_FILE_TO_TRASH_FAILED, ENV_MANAGER.MSG_MOVE_FILE_TO_TRASH_FAILED);
ELSIF vProcessControlStatus = 'CHANGE_STATUS_TO_ARCHIVED_FAILURE' THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_CHANGE_STAT_TO_ARCHIVED_FAILED, 'ERROR', vParameters);
DBMS_CLOUD.DELETE_OBJECT(credential_name => ENV_MANAGER.gvCredentialName,
object_uri => vUri||vFilename);
ENV_MANAGER.LOG_PROCESS_EVENT('Archival PARQUET file dropped.','DEBUG', vUri||vFilename);
ENV_MANAGER.LOG_PROCESS_EVENT('Archival PARQUET file dropped: '||vUri||vFilename,'DEBUG', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_CHANGE_STAT_TO_ARCHIVED_FAILED, ENV_MANAGER.MSG_CHANGE_STAT_TO_ARCHIVED_FAILED);
ELSIF vProcessControlStatus = 'RESTORE_FILE_FROM_TRASH_FAILURE' THEN
ENV_MANAGER.LOG_PROCESS_EVENT('Some files were not restored from TRASH. Check A_PROCESS_LOG table for details','ERROR');
ENV_MANAGER.LOG_PROCESS_EVENT('Some files were not restored from TRASH. Check A_PROCESS_LOG table for details','ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_RESTORE_FILE_FROM_TRASH, ENV_MANAGER.MSG_RESTORE_FILE_FROM_TRASH);
END IF;
@@ -394,11 +394,11 @@ AS
vSourceFileConfig := FILE_MANAGER.GET_SOURCE_FILE_CONFIG(pSourceFileConfigKey => pSourceFileConfigKey);
vTableName := DBMS_ASSERT.SCHEMA_NAME(vSourceFileConfig.ODS_SCHEMA_NAME) || '.'||vSourceFileConfig.A_SOURCE_KEY||'_'||DBMS_ASSERT.simple_sql_name(vSourceFileConfig.TABLE_ID)||'_ODS';
ENV_MANAGER.LOG_PROCESS_EVENT('vTableName','DEBUG',vTableName);
ENV_MANAGER.LOG_PROCESS_EVENT('vTableName = '||vTableName, 'DEBUG', vParameters);
-- Get WHERE clause based on archival strategy (MARS-828)
vWhereClause := GET_ARCHIVAL_WHERE_CLAUSE(vSourceFileConfig);
ENV_MANAGER.LOG_PROCESS_EVENT('vWhereClause','DEBUG',vWhereClause);
ENV_MANAGER.LOG_PROCESS_EVENT('vWhereClause = '||vWhereClause, 'DEBUG', vParameters);
-- Use strategy-based WHERE clause for statistics (MARS-828)
vQuery :=
@@ -439,7 +439,7 @@ AS
) r
on t.filename = r.object_name'
;
ENV_MANAGER.LOG_PROCESS_EVENT('vQuery','DEBUG',vQuery);
ENV_MANAGER.LOG_PROCESS_EVENT('vQuery:', 'DEBUG', vQuery);
execute immediate vQuery into vStats;
vStats.A_TABLE_STAT_KEY := CT_MRDS.A_TABLE_STAT_KEY_SEQ.NEXTVAL;

@@ -304,7 +304,6 @@ AS
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_FILE_ALREADY_REGISTERED, vgMsgTmp);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
@@ -331,7 +330,6 @@ AS
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.MSG_UNKNOWN);
@@ -549,7 +547,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_MISSING_COLUMN_DATE_FORMAT, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_MISSING_COLUMN_DATE_FORMAT, ENV_MANAGER.MSG_MISSING_COLUMN_DATE_FORMAT);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
@@ -842,7 +839,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO',vParameters);
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END DROP_EXTERNAL_TABLE;
@@ -883,7 +879,6 @@ AS
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.MSG_UNKNOWN);
END COPY_FILE;
@@ -942,7 +937,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_NO_CONFIG_FOR_RECEIVED_FILE, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_NO_CONFIG_FOR_RECEIVED_FILE, ENV_MANAGER.MSG_NO_CONFIG_FOR_RECEIVED_FILE);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.MSG_UNKNOWN);
END MOVE_FILE;
@@ -1156,7 +1150,6 @@ AS
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
@@ -1356,7 +1349,6 @@ AS
-- ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_MISSING_COLUMN_DATE_FORMAT, 'ERROR', vParameters);
-- RAISE_ERROR(ENV_MANAGER.CODE_MISSING_COLUMN_DATE_FORMAT);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END GENERATE_EXTERNAL_TABLE_PARAMS;
@@ -1376,7 +1368,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_DUPLICATED_SOURCE_KEY, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_DUPLICATED_SOURCE_KEY, ENV_MANAGER.MSG_DUPLICATED_SOURCE_KEY);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END ADD_SOURCE;
@@ -1463,7 +1454,6 @@ AS
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END DELETE_SOURCE_CASCADE;
@@ -1497,7 +1487,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_MULTIPLE_CONTAINER_ENTRIES, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_MULTIPLE_CONTAINER_ENTRIES, ENV_MANAGER.MSG_MULTIPLE_CONTAINER_ENTRIES);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.MSG_UNKNOWN);
@@ -1540,7 +1529,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_MULTIPLE_MATCH_FOR_SRCFILE, vgMsgTmp);
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.MSG_UNKNOWN);
@@ -1589,7 +1577,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_MISSING_SOURCE_KEY, 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_MISSING_SOURCE_KEY, ENV_MANAGER.MSG_MISSING_SOURCE_KEY);
ELSE
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN , 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END IF;
@@ -1618,7 +1605,6 @@ AS
ENV_MANAGER.LOG_PROCESS_EVENT('End','DEBUG',vParameters);
EXCEPTION
WHEN OTHERS THEN
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.MSG_UNKNOWN, 'ERROR', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT(ENV_MANAGER.GET_ERROR_STACK(pFormat => 'TABLE', pCode=> SQLCODE), 'ERROR', vParameters);
RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_UNKNOWN, ENV_MANAGER.GET_ERROR_STACK(pFormat => 'OUTPUT', pCode=> SQLCODE));
END ADD_COLUMN_DATE_FORMAT;

@@ -4,6 +4,7 @@
-- Purpose: Store source file configuration and processing rules
-- MARS-1049: Added ENCODING column for CSV character set support
-- MARS-828: Added ARCHIVAL_STRATEGY and MINIMUM_AGE_MONTHS for archival automation
-- MARS-1409: Added IS_WORKFLOW_SUCCESS_REQUIRED flag for workflow bypass
-- ====================================================================
CREATE TABLE CT_MRDS.A_SOURCE_FILE_CONFIG (
@@ -16,20 +17,22 @@ CREATE TABLE CT_MRDS.A_SOURCE_FILE_CONFIG (
TABLE_ID VARCHAR2(200),
TEMPLATE_TABLE_NAME VARCHAR2(200),
CONTAINER_FILE_KEY NUMBER(38,0),
DAYS_FOR_ARCHIVE_THRESHOLD NUMBER(4,0),
FILES_COUNT_OVER_ARCHIVE_THRESHOLD NUMBER(38,0),
BYTES_SUM_OVER_ARCHIVE_THRESHOLD NUMBER(38,0),
ARCHIVE_THRESHOLD_DAYS NUMBER(4,0),
ARCHIVE_THRESHOLD_FILES_COUNT NUMBER(38,0),
ARCHIVE_THRESHOLD_BYTES_SUM NUMBER(38,0),
ODS_SCHEMA_NAME VARCHAR2(100),
ROWS_COUNT_OVER_ARCHIVE_THRESHOLD NUMBER(38,0),
ARCHIVE_THRESHOLD_ROWS_COUNT NUMBER(38,0),
HOURS_TO_EXPIRE_STATISTICS NUMBER(38,3),
ARCHIVAL_STRATEGY VARCHAR2(50),
MINIMUM_AGE_MONTHS NUMBER(3,0),
ENCODING VARCHAR2(50) DEFAULT 'UTF8',
ARCHIVE_ENABLED CHAR(1) DEFAULT 'Y' NOT NULL,
KEEP_IN_TRASH CHAR(1) DEFAULT 'N' NOT NULL,
IS_ARCHIVE_ENABLED CHAR(1) DEFAULT 'N' NOT NULL,
IS_KEPT_IN_TRASH CHAR(1) DEFAULT 'N' NOT NULL,
IS_WORKFLOW_SUCCESS_REQUIRED CHAR(1) DEFAULT 'Y' NOT NULL,
CONSTRAINT A_SOURCE_FILE_CONFIG_PK PRIMARY KEY (A_SOURCE_FILE_CONFIG_KEY),
CONSTRAINT CHK_ARCHIVE_ENABLED CHECK (ARCHIVE_ENABLED IN ('Y', 'N')),
CONSTRAINT CHK_KEEP_IN_TRASH CHECK (KEEP_IN_TRASH IN ('Y', 'N')),
CONSTRAINT CHK_IS_ARCHIVE_ENABLED CHECK (IS_ARCHIVE_ENABLED IN ('Y', 'N')),
CONSTRAINT CHK_IS_KEPT_IN_TRASH CHECK (IS_KEPT_IN_TRASH IN ('Y', 'N')),
CONSTRAINT CHK_IS_WORKFLOW_SUCCESS_REQUIRED CHECK (IS_WORKFLOW_SUCCESS_REQUIRED IN ('Y', 'N')),
CONSTRAINT SOURCE_FILE_TYPE_CHK CHECK (SOURCE_FILE_TYPE IN ('INPUT', 'CONTAINER', 'LOAD_CONFIG')),
CONSTRAINT ASFC_A_SOURCE_KEY_FK FOREIGN KEY(A_SOURCE_KEY) REFERENCES CT_MRDS.A_SOURCE(A_SOURCE_KEY),
CONSTRAINT ASFC_CONTAINER_FILE_KEY_FK FOREIGN KEY(CONTAINER_FILE_KEY) REFERENCES CT_MRDS.A_SOURCE_FILE_CONFIG(A_SOURCE_FILE_CONFIG_KEY),
@@ -47,10 +50,67 @@ ON "CT_MRDS"."A_SOURCE_FILE_CONFIG" ("SOURCE_FILE_TYPE", "SOURCE_FILE_ID", "TABL
TABLESPACE "DATA";
-- Column comments
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY IS 'Archival strategy: THRESHOLD_BASED, CURRENT_MONTH_ONLY, MINIMUM_AGE_MONTHS, HYBRID. Added in MARS-828';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS IS 'Minimum age in months before archival (required for MINIMUM_AGE_MONTHS strategy). Added in MARS-828';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING IS 'Oracle character set name for CSV files (e.g., UTF8, WE8MSWIN1252, EE8ISO8859P2). Added in MARS-1049';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_ENABLED IS 'Y=Enable archiving, N=Skip archiving. Controls if table participates in archival process. Added in MARS-828 v3.3.0';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.KEEP_IN_TRASH IS 'Y=Keep files in TRASH after archiving, N=Delete immediately. Controls TRASH retention policy. Added in MARS-828 v3.3.0';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY IS
'Primary key - unique identifier for source file configuration record';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_KEY IS
'Foreign key to A_SOURCE table - identifies the source system (e.g., LM, C2D, CSDB)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE IS
'Type of file configuration: INPUT (data files), CONTAINER (xml files), or LOAD_CONFIG (configuration files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID IS
'Unique identifier for the source file within the source system (e.g., UC_DISSEM, STANDING_FACILITIES)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_DESC IS
'Human-readable description of the source file and its purpose';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_NAME_PATTERN IS
'Filename pattern for matching incoming files (supports wildcards, e.g., UC_NMA_DISSEM-*.csv)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID IS
'Identifier for the target table where data will be loaded (without schema prefix)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME IS
'Fully qualified name of template table in CT_ET_TEMPLATES schema used for external table creation';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY IS
'Foreign key to parent container configuration when this file is part of an xml (NULL for standalone files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_DAYS IS
'Threshold for THRESHOLD_BASED strategy: archive data older than N days';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_FILES_COUNT IS
'Trigger archival when file count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_BYTES_SUM IS
'Trigger archival when total size in bytes exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_ROWS_COUNT IS
'Trigger archival when total row count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ODS_SCHEMA_NAME IS
'Schema name where ODS external tables are created (typically ODS)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.HOURS_TO_EXPIRE_STATISTICS IS
'Number of hours before table statistics expire and need to be recalculated';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY IS
'Archival strategy: THRESHOLD_BASED (days-based), MINIMUM_AGE_MONTHS (0=current month, N=retain N months), HYBRID (combination)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS IS
'Minimum age in months before archival (required for MINIMUM_AGE_MONTHS and HYBRID strategies, 0=current month only)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING IS
'Oracle character set name for CSV files (e.g., UTF8, WE8MSWIN1252, EE8ISO8859P2)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED IS
'Y=Enable archiving, N=Skip archiving. Controls if table participates in archival process';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_KEPT_IN_TRASH IS
'Y=Keep files in TRASH after archiving, N=Delete immediately. Controls TRASH retention policy';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED IS
'Y=Archivization requires WORKFLOW_SUCCESSFUL=Y (standard DBT flow), N=Archive regardless of workflow completion status (bypass for manual/non-DBT sources). Added MARS-1409';
GRANT SELECT, INSERT, UPDATE, DELETE ON CT_MRDS.A_SOURCE_FILE_CONFIG TO MRDS_LOADER_ROLE;

@@ -16,9 +16,16 @@ CREATE TABLE CT_MRDS.A_TABLE_STAT (
OVER_ARCH_THRESOLD_SIZE NUMBER(38,0),
ARCH_THRESHOLD_DAYS NUMBER(4,0),
CREATED TIMESTAMP(6) DEFAULT SYSTIMESTAMP,
ARCHIVAL_STRATEGY VARCHAR2(30),
ARCH_MINIMUM_AGE_MONTHS NUMBER(4,0),
IS_WORKFLOW_SUCCESS_REQUIRED CHAR(1),
CONSTRAINT A_TABLE_STAT_UK1 UNIQUE(A_SOURCE_FILE_CONFIG_KEY)
) TABLESPACE "DATA";
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.ARCHIVAL_STRATEGY IS 'Archival strategy used when statistics were gathered (THRESHOLD_BASED, MINIMUM_AGE_MONTHS, HYBRID). Populated by FILE_ARCHIVER.GATHER_TABLE_STAT (MARS-1409)';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.ARCH_MINIMUM_AGE_MONTHS IS 'Minimum age threshold in months used when statistics were gathered. Populated for MINIMUM_AGE_MONTHS and HYBRID strategies; NULL for THRESHOLD_BASED. Complements ARCH_THRESHOLD_DAYS. Populated by FILE_ARCHIVER.GATHER_TABLE_STAT (MARS-1409)';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.IS_WORKFLOW_SUCCESS_REQUIRED IS 'Reflects IS_WORKFLOW_SUCCESS_REQUIRED value from A_SOURCE_FILE_CONFIG at time statistics were gathered. Y=counts include only WORKFLOW_SUCCESSFUL=Y rows, N=all rows counted regardless of workflow status. Added MARS-1409';
-- Unique constraint index (from production export)
CREATE UNIQUE INDEX "CT_MRDS"."A_TABLE_STAT_UK1"
ON "CT_MRDS"."A_TABLE_STAT" ("A_SOURCE_FILE_CONFIG_KEY")

@@ -15,5 +15,12 @@ CREATE TABLE CT_MRDS.A_TABLE_STAT_HIST (
SIZE NUMBER(38,0),
OVER_ARCH_THRESOLD_SIZE NUMBER(38,0),
ARCH_THRESHOLD_DAYS NUMBER(4,0),
CREATED TIMESTAMP(6) DEFAULT SYSTIMESTAMP
) TABLESPACE "DATA";
CREATED TIMESTAMP(6) DEFAULT SYSTIMESTAMP,
ARCHIVAL_STRATEGY VARCHAR2(30),
ARCH_MINIMUM_AGE_MONTHS NUMBER(4,0),
IS_WORKFLOW_SUCCESS_REQUIRED CHAR(1)
) TABLESPACE "DATA";
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.ARCHIVAL_STRATEGY IS 'Archival strategy used when statistics were gathered (THRESHOLD_BASED, MINIMUM_AGE_MONTHS, HYBRID). Populated by FILE_ARCHIVER.GATHER_TABLE_STAT (MARS-1409)';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.ARCH_MINIMUM_AGE_MONTHS IS 'Minimum age threshold in months used when statistics were gathered. Populated for MINIMUM_AGE_MONTHS and HYBRID strategies; NULL for THRESHOLD_BASED. Complements ARCH_THRESHOLD_DAYS. Populated by FILE_ARCHIVER.GATHER_TABLE_STAT (MARS-1409)';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.IS_WORKFLOW_SUCCESS_REQUIRED IS 'Reflects IS_WORKFLOW_SUCCESS_REQUIRED value from A_SOURCE_FILE_CONFIG at time statistics were gathered. Y=counts include only WORKFLOW_SUCCESSFUL=Y rows, N=all rows counted regardless of workflow status. Added MARS-1409';

@@ -10,13 +10,14 @@ The FILE_ARCHIVER package provides flexible archival strategies that accommodate
- **Three Archival Strategies**: THRESHOLD_BASED, MINIMUM_AGE_MONTHS (with 0=current month only), HYBRID
- **Flexible Configuration**: Per-table archival strategy configuration via A_SOURCE_FILE_CONFIG
- **Workflow Bypass**: IS_WORKFLOW_SUCCESS_REQUIRED flag allows archivization of files from non-DBT sources
- **Validation**: Automatic validation of strategy-specific configuration requirements
### Package Information
- **Schema**: CT_MRDS
- **Package**: FILE_ARCHIVER
- **Current Version**: 3.3.0
- **Current Version**: 3.4.0
- **Dependencies**: ENV_MANAGER, FILE_MANAGER, A_SOURCE_FILE_CONFIG, A_SOURCE_FILE_RECEIVED, A_WORKFLOW_HISTORY
### Critical Prerequisites
@@ -67,7 +68,7 @@ END;
| Strategy | WHERE Clause Logic | Configuration Required | Primary Use Case |
|----------|-------------------|----------------------|------------------|
| `THRESHOLD_BASED` | Days since workflow start > threshold | DAYS_FOR_ARCHIVE_THRESHOLD | Simple time-based archival |
| `THRESHOLD_BASED` | Days since workflow start > threshold | ARCHIVE_THRESHOLD_DAYS | Simple time-based archival |
| `MINIMUM_AGE_MONTHS` | Archive data older than X months (0=current month only) | MINIMUM_AGE_MONTHS (≥0) | All sources - flexible retention (0 for LM, 6 for CSDB) |
| `HYBRID` | Combines month boundary + minimum age | MINIMUM_AGE_MONTHS | Advanced retention scenarios |
@@ -77,14 +78,14 @@ Archives data based on number of days since workflow start.
**WHERE Clause**:
```sql
extract(day from (systimestamp - workflow_start)) > DAYS_FOR_ARCHIVE_THRESHOLD
extract(day from (systimestamp - workflow_start)) > ARCHIVE_THRESHOLD_DAYS
```
**Configuration**:
```sql
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET ARCHIVAL_STRATEGY = 'THRESHOLD_BASED',
DAYS_FOR_ARCHIVE_THRESHOLD = 30,
ARCHIVE_THRESHOLD_DAYS = 30,
MINIMUM_AGE_MONTHS = NULL
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND SOURCE_FILE_ID = 'C2D_DATA'
@@ -185,17 +186,17 @@ END IF;
**Behavior**: Archives data **only when** at least one of the following thresholds is exceeded:
1. **FILES_COUNT_OVER_ARCHIVE_THRESHOLD** - Number of files eligible for archival
2. **ROWS_COUNT_OVER_ARCHIVE_THRESHOLD** - Number of rows eligible for archival
3. **BYTES_SUM_OVER_ARCHIVE_THRESHOLD** - Total size in bytes eligible for archival
1. **ARCHIVE_THRESHOLD_FILES_COUNT** - Number of files eligible for archival
2. **ARCHIVE_THRESHOLD_ROWS_COUNT** - Number of rows eligible for archival
3. **ARCHIVE_THRESHOLD_BYTES_SUM** - Total size in bytes eligible for archival
```sql
-- Executed for THRESHOLD_BASED and HYBRID strategies
IF vTableStat.OVER_ARCH_THRESOLD_FILE_COUNT >= vSourceFileConfig.FILES_COUNT_OVER_ARCHIVE_THRESHOLD THEN
IF vTableStat.OVER_ARCH_THRESOLD_FILE_COUNT >= vSourceFileConfig.ARCHIVE_THRESHOLD_FILES_COUNT THEN
vArchivalTriggeredBy := 'FILES_COUNT';
ELSIF vTableStat.OVER_ARCH_THRESOLD_ROW_COUNT >= vSourceFileConfig.ROWS_COUNT_OVER_ARCHIVE_THRESHOLD THEN
ELSIF vTableStat.OVER_ARCH_THRESOLD_ROW_COUNT >= vSourceFileConfig.ARCHIVE_THRESHOLD_ROWS_COUNT THEN
vArchivalTriggeredBy := 'ROWS_COUNT';
ELSIF vTableStat.OVER_ARCH_THRESOLD_SIZE >= vSourceFileConfig.BYTES_SUM_OVER_ARCHIVE_THRESHOLD THEN
ELSIF vTableStat.OVER_ARCH_THRESOLD_SIZE >= vSourceFileConfig.ARCHIVE_THRESHOLD_BYTES_SUM THEN
vArchivalTriggeredBy := 'BYTES_SUM';
END IF;
```
@@ -206,9 +207,9 @@ END IF;
```sql
-- Set archival thresholds for THRESHOLD_BASED strategy
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET FILES_COUNT_OVER_ARCHIVE_THRESHOLD = 10, -- Archive when 10+ files eligible
ROWS_COUNT_OVER_ARCHIVE_THRESHOLD = 100000, -- Archive when 100k+ rows eligible
BYTES_SUM_OVER_ARCHIVE_THRESHOLD = 104857600 -- Archive when 100MB+ eligible
SET ARCHIVE_THRESHOLD_FILES_COUNT = 10, -- Archive when 10+ files eligible
ARCHIVE_THRESHOLD_ROWS_COUNT = 100000, -- Archive when 100k+ rows eligible
ARCHIVE_THRESHOLD_BYTES_SUM = 104857600 -- Archive when 100MB+ eligible
WHERE ARCHIVAL_STRATEGY = 'THRESHOLD_BASED'
AND TABLE_ID = 'YOUR_TABLE';
```
@@ -243,15 +244,15 @@ WHERE ...;
## Archival Control Configuration
### ARCHIVE_ENABLED Column
### IS_ARCHIVE_ENABLED Column
Controls whether archival is enabled for a specific table configuration.
**Column**: `A_SOURCE_FILE_CONFIG.ARCHIVE_ENABLED` (VARCHAR2(1), DEFAULT 'Y')
**Column**: `A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED` (CHAR(1), DEFAULT 'N' NOT NULL)
**Values**:
- `'Y'` (default) - Table is eligible for archival processing
- `'N'` - Table is excluded from archival (batch operations skip this config)
- `'Y'` - Table is eligible for archival processing
- `'N'` (default) - Table is excluded from archival (batch operations skip this config)
**Use Cases**:
- Disable archival for specific tables without removing configuration
@@ -262,7 +263,7 @@ Controls whether archival is enabled for specific table configuration.
```sql
-- Disable archival for specific table
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET ARCHIVE_ENABLED = 'N'
SET IS_ARCHIVE_ENABLED = 'N'
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND SOURCE_FILE_ID = 'CSDB'
AND TABLE_ID = 'CSDB_DEBT';
@@ -270,7 +271,7 @@ COMMIT;
-- Re-enable archival
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET ARCHIVE_ENABLED = 'Y'
SET IS_ARCHIVE_ENABLED = 'Y'
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND SOURCE_FILE_ID = 'CSDB'
AND TABLE_ID = 'CSDB_DEBT';
@@ -280,22 +281,22 @@ COMMIT;
SELECT
SOURCE_FILE_ID,
TABLE_ID,
ARCHIVE_ENABLED,
IS_ARCHIVE_ENABLED,
ARCHIVAL_STRATEGY
FROM CT_MRDS.A_SOURCE_FILE_CONFIG
WHERE SOURCE_FILE_TYPE = 'INPUT'
ORDER BY SOURCE_FILE_ID, TABLE_ID;
```
### KEEP_IN_TRASH Column
### IS_KEPT_IN_TRASH Column
Controls TRASH folder retention policy for archived files.
**Column**: `A_SOURCE_FILE_CONFIG.KEEP_IN_TRASH` (VARCHAR2(1), DEFAULT 'Y')
**Column**: `A_SOURCE_FILE_CONFIG.IS_KEPT_IN_TRASH` (CHAR(1), DEFAULT 'N' NOT NULL)
**Values**:
- `'Y'` (default) - CSV files kept in TRASH folder after archival (status: ARCHIVED_AND_TRASHED)
- `'N'` - CSV files deleted from TRASH folder after archival (status: ARCHIVED_AND_PURGED)
- `'Y'` - CSV files kept in TRASH folder after archival (status: ARCHIVED_AND_TRASHED)
- `'N'` (default) - CSV files deleted from TRASH folder after archival (status: ARCHIVED_AND_PURGED)
**Benefits of TRASH Retention (IS_KEPT_IN_TRASH = 'Y')**:
- Safety net for rollback if archival issues discovered
@@ -311,7 +312,7 @@ Controls TRASH folder retention policy for archived files.
```sql
-- Production: Keep files in TRASH (recommended)
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET KEEP_IN_TRASH = 'Y'
SET IS_KEPT_IN_TRASH = 'Y'
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND SOURCE_FILE_ID = 'LM'
AND TABLE_ID LIKE 'LM_%';
@@ -319,20 +320,64 @@ COMMIT;
-- Test environment: Cleanup TRASH to save storage
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET KEEP_IN_TRASH = 'N'
SET IS_KEPT_IN_TRASH = 'N'
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND SOURCE_FILE_ID = 'TEST_SOURCE';
COMMIT;
-- Bulk configuration by source
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET KEEP_IN_TRASH = 'Y'
SET IS_KEPT_IN_TRASH = 'Y'
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND SOURCE_FILE_ID IN ('CSDB', 'C2D', 'LM');
COMMIT;
```
### IS_WORKFLOW_SUCCESS_REQUIRED Column
Controls whether archivization requires `WORKFLOW_SUCCESSFUL='Y'` in A_WORKFLOW_HISTORY. Added in MARS-1409.
**Column**: `A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED` (CHAR(1), DEFAULT 'Y' NOT NULL)
**Values**:
- `'Y'` (default) - Only files with `WORKFLOW_SUCCESSFUL='Y'` are eligible for archivization (standard Airflow+DBT flow)
- `'N'` - Archivization proceeds regardless of workflow completion status (bypass for manual/non-DBT sources)
**Use Cases**:
- `'Y'`: All standard INBOX-validated sources (LM, CSDB, C2D) - ensures only fully-processed files are archived
- `'N'`: Legacy data migrated via `DATA_EXPORTER`, manual uploads, or any source without DBT workflow tracking
**GATHER_TABLE_STAT behavior**:
- `'Y'`: Statistics (file count, row count, byte sum) counted only from files with `WORKFLOW_SUCCESSFUL='Y'`
- `'N'`: Statistics counted from all INGESTED files regardless of workflow outcome
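The conditional filter can be pictured as one predicate switched by the flag. A minimal sketch, assuming A_WORKFLOW_HISTORY resides in CT_ODS and joins on A_WORKFLOW_HISTORY_KEY (the real statistics query is internal to GATHER_TABLE_STAT):
```sql
-- Sketch only: the actual query lives inside FILE_ARCHIVER.GATHER_TABLE_STAT;
-- the A_WORKFLOW_HISTORY schema and join are assumptions for illustration
SELECT COUNT(*)                 AS file_count,
       SUM(sfr.TOTAL_RECORDS)   AS row_count,
       SUM(sfr.FILE_SIZE_BYTES) AS byte_sum
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr
JOIN CT_ODS.A_WORKFLOW_HISTORY wh
  ON sfr.A_WORKFLOW_HISTORY_KEY = wh.A_WORKFLOW_HISTORY_KEY
WHERE sfr.PROCESSING_STATUS = 'INGESTED'
  AND (:is_workflow_success_required = 'N' OR wh.WORKFLOW_SUCCESSFUL = 'Y');
```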
**Configuration Example**:
```sql
-- Standard source: require DBT workflow completion (default)
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET IS_WORKFLOW_SUCCESS_REQUIRED = 'Y'
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND SOURCE_FILE_ID = 'LM';
COMMIT;
-- Non-DBT source: bypass workflow check
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET IS_WORKFLOW_SUCCESS_REQUIRED = 'N'
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND SOURCE_FILE_ID = 'MANUAL_UPLOAD';
COMMIT;
-- Or set at configuration time via ADD_SOURCE_FILE_CONFIG
CALL CT_MRDS.FILE_MANAGER.ADD_SOURCE_FILE_CONFIG(
pSourceKey => 'MANUAL',
pSourceFileType => 'INPUT',
pSourceFileId => 'MANUAL_UPLOAD',
pSourceFileDesc => 'Manual data upload without DBT',
pSourceFileNamePattern => 'manual_*.csv',
pTableId => 'MY_TABLE',
pTemplateTableName => 'CT_ET_TEMPLATES.MY_TABLE',
pIsWorkflowSuccessRequired => 'N' -- bypass workflow check
);
```
## Data Lifecycle Workflow
### Status Tracking in A_SOURCE_FILE_RECEIVED
@@ -348,7 +393,7 @@ INGESTED → ARCHIVED_AND_TRASHED → ARCHIVED_AND_PURGED (optional)
**Status Descriptions**:
- **INGESTED**: File successfully processed through Airflow+DBT, residing in ODS bucket
- **ARCHIVED_AND_TRASHED**: File archived to Parquet in ARCHIVE bucket, CSV retained in TRASH folder (DATA bucket)
- **ARCHIVED_AND_PURGED**: File archived to Parquet, CSV deleted from TRASH folder (when KEEP_IN_TRASH='N')
- **ARCHIVED_AND_PURGED**: File archived to Parquet, CSV deleted from TRASH folder (when IS_KEPT_IN_TRASH='N')
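To see where a configuration's files currently sit in this lifecycle, a simple distribution query works (bind variable illustrative):
```sql
-- Count files per lifecycle status for one configuration
SELECT PROCESSING_STATUS, COUNT(*) AS FILES
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE A_SOURCE_FILE_CONFIG_KEY = :configKey
GROUP BY PROCESSING_STATUS
ORDER BY PROCESSING_STATUS;
```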
**Associated Columns Updated During Archival**:
```sql
@@ -390,9 +435,9 @@ https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/namespace/b/archive/o/ARC
2.1 TRASH Subfolder (DATA Bucket - File Retention)
├─ Located in DATA bucket (e.g., TRASH/LM/TABLE_NAME)
├─ Stores CSV files after archival to Parquet
├─ Status: ARCHIVED_AND_TRASHED (default, controlled by KEEP_IN_TRASH config)
├─ Status: ARCHIVED_AND_TRASHED (default, controlled by IS_KEPT_IN_TRASH config)
├─ Enables rollback if archival issues occur
└─ Optional cleanup: ARCHIVED_AND_PURGED (when KEEP_IN_TRASH = 'N')
└─ Optional cleanup: ARCHIVED_AND_PURGED (when IS_KEPT_IN_TRASH = 'N')
3. ARCHIVE Bucket (Long-term Storage)
├─ Historical data in Parquet format
@@ -402,24 +447,35 @@ https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/namespace/b/archive/o/ARC
**Key Procedures**:
- `ARCHIVE_TABLE_DATA(pSourceFileConfigKey)` - Main archival procedure using strategy-specific WHERE clause
- TRASH folder retention controlled by `KEEP_IN_TRASH` column in A_SOURCE_FILE_CONFIG
- TRASH folder retention controlled by `IS_KEPT_IN_TRASH` column in A_SOURCE_FILE_CONFIG
- `ARCHIVE_ALL(pSourceFileConfigKey, pSourceKey, pArchiveAll)` - Batch archival with 3-level granularity and error handling (see the sketch after this list)
- **Level 3 (Highest Priority)**: Single configuration via `pSourceFileConfigKey`
- **Level 2 (Medium Priority)**: All configurations for source via `pSourceKey`
- **Level 1 (Lowest Priority)**: All configurations system-wide via `pArchiveAll`
- **Error Handling**: Continues processing other tables on individual failures
- **Filtering**: Respects `ARCHIVE_ENABLED='Y'` (skips disabled configurations)
- **Individual TRASH Policy**: Each table's `KEEP_IN_TRASH` setting applied independently
- **Filtering**: Respects `IS_ARCHIVE_ENABLED='Y'` (skips disabled configurations)
- **Individual TRASH Policy**: Each table's `IS_KEPT_IN_TRASH` setting applied independently
- **Summary Reporting**: Returns counts of Archived/Skipped/Failed tables
- `GET_ARCHIVAL_WHERE_CLAUSE` - Returns WHERE clause based on configured strategy
- `GATHER_TABLE_STAT` - Calculates archival statistics using strategy logic
- `GATHER_TABLE_STAT_ALL(pSourceFileConfigKey, pSourceKey, pGatherAll)` - Batch statistics with 3-level granularity
- `RESTORE_FILE_FROM_TRASH(pSourceFileConfigKey, pSourceKey, pRestoreAll)` - Restore archived files from TRASH
- `PURGE_TRASH_FOLDER(pSourceFileConfigKey, pSourceKey, pPurgeAll)` - Purge TRASH folder with 3-level granularity
- `GATHER_TABLE_STAT(pSourceFileConfigKey)` - Calculates archival statistics using strategy logic
- `GATHER_TABLE_STAT_ALL(pSourceFileConfigKey, pSourceKey, pGatherAll, pOnlyEnabled)` - Batch statistics with 3-level granularity
- `pOnlyEnabled` (DEFAULT TRUE): When TRUE, only processes tables with `IS_ARCHIVE_ENABLED='Y'`
- `RESTORE_FILE_FROM_TRASH(pSourceFileReceivedKey, pSourceFileConfigKey, pRestoreAll)` - Restore archived files from TRASH
- `PURGE_TRASH_FOLDER(pSourceFileReceivedKey, pSourceFileConfigKey, pPurgeAll)` - Purge TRASH folder with 3-level granularity
- `GET_VERSION` / `GET_BUILD_INFO` / `GET_VERSION_HISTORY` - Package version and metadata
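As referenced above, a minimal sketch of the three ARCHIVE_ALL granularity levels; the config key and the 'Y'/'N' convention for pArchiveAll are illustrative assumptions:
```sql
-- Sketch only: unused parameters are assumed to default to NULL
BEGIN
  -- Level 3: single configuration
  CT_MRDS.FILE_ARCHIVER.ARCHIVE_ALL(pSourceFileConfigKey => 42);
  -- Level 2: all configurations for source LM
  CT_MRDS.FILE_ARCHIVER.ARCHIVE_ALL(pSourceKey => 'LM');
  -- Level 1: all configurations system-wide
  CT_MRDS.FILE_ARCHIVER.ARCHIVE_ALL(pArchiveAll => 'Y');
END;
/
```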
**Function Wrappers (Python Integration)**:
All key procedures have `FN_*` function overloads returning `PLS_INTEGER` (SQLCODE: 0=success, error code on failure) for Python library integration:
- `FN_ARCHIVE_TABLE_DATA`, `FN_GATHER_TABLE_STAT`, `FN_ARCHIVE_ALL`, `FN_GATHER_TABLE_STAT_ALL`
- `RESTORE_FILE_FROM_TRASH` and `PURGE_TRASH_FOLDER` also have function overloads returning PLS_INTEGER
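A minimal sketch of a wrapper call, assuming FN_ARCHIVE_TABLE_DATA mirrors the procedure's pSourceFileConfigKey parameter:
```sql
-- Sketch only: status-code convention per the description above (0 = success)
DECLARE
  vStatus PLS_INTEGER;
BEGIN
  vStatus := CT_MRDS.FILE_ARCHIVER.FN_ARCHIVE_TABLE_DATA(pSourceFileConfigKey => 42);
  IF vStatus <> 0 THEN
    DBMS_OUTPUT.PUT_LINE('Archival failed, SQLCODE: ' || vStatus);
  END IF;
END;
/
```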
**Internal Functions** (not callable externally):
- `GET_ARCHIVAL_WHERE_CLAUSE` - Returns WHERE clause based on configured strategy (private)
- `GET_TABLE_STAT` - Retrieves or auto-generates table statistics (private)
**Archival Execution**:
```sql
-- Single table archival (TRASH retention controlled by KEEP_IN_TRASH config)
-- Single table archival (TRASH retention controlled by IS_KEPT_IN_TRASH config)
BEGIN
CT_MRDS.FILE_ARCHIVER.ARCHIVE_TABLE_DATA(
pSourceFileConfigKey => vSourceFileConfigKey
@@ -451,11 +507,11 @@ END;
**Strategy-Based Filtering**:
- Package retrieves ARCHIVAL_STRATEGY from A_SOURCE_FILE_CONFIG
- GET_ARCHIVAL_WHERE_CLAUSE generates appropriate WHERE clause
- Only tables with ARCHIVE_ENABLED = 'Y' are processed
- Only tables with IS_ARCHIVE_ENABLED = 'Y' are processed
- Data matching criteria moved from ODS to ARCHIVE bucket
- CSV files moved to TRASH subfolder in DATA bucket (ODS/ → TRASH/)
- Parquet format with Hive-style partitioning applied to ARCHIVE bucket
- TRASH retention controlled by KEEP_IN_TRASH column in A_SOURCE_FILE_CONFIG
- TRASH retention controlled by IS_KEPT_IN_TRASH column in A_SOURCE_FILE_CONFIG
### Automatic Rollback Mechanism
@@ -465,7 +521,7 @@ FILE_ARCHIVER implements **automatic rollback** to ensure data integrity if arch
1. **Export to ARCHIVE**: Data exported to Parquet format in ARCHIVE bucket
2. **Status Update**: A_SOURCE_FILE_RECEIVED records updated to 'ARCHIVED_AND_TRASHED'
3. **Move to TRASH**: CSV files moved from ODS to TRASH folder (DATA bucket)
4. **Optional Cleanup**: If KEEP_IN_TRASH='N', files deleted from TRASH
4. **Optional Cleanup**: If IS_KEPT_IN_TRASH='N', files deleted from TRASH
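Step 3 follows the copy-then-delete pattern used in the package body; a minimal sketch with illustrative URIs (the exact source-cleanup call is an assumption):
```sql
-- Sketch of the ODS -> TRASH move; credential and URIs are illustrative
BEGIN
  DBMS_CLOUD.COPY_OBJECT(
    source_credential_name => ENV_MANAGER.gvCredentialName,
    source_object_uri      => 'https://.../b/data/o/ODS/LM/TABLE_NAME/file.csv',
    target_object_uri      => 'https://.../b/data/o/TRASH/LM/TABLE_NAME/file.csv',
    target_credential_name => ENV_MANAGER.gvCredentialName
  );
  DBMS_CLOUD.DELETE_OBJECT(
    credential_name => ENV_MANAGER.gvCredentialName,
    object_uri      => 'https://.../b/data/o/ODS/LM/TABLE_NAME/file.csv'
  );
END;
/
```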
**Automatic Rollback Trigger**:
If **any error occurs** during step 3 (Move to TRASH), the system:
@@ -655,7 +711,7 @@ SELECT
TABLE_ID,
ARCHIVAL_STRATEGY,
MINIMUM_AGE_MONTHS,
DAYS_FOR_ARCHIVE_THRESHOLD
ARCHIVE_THRESHOLD_DAYS
FROM CT_MRDS.A_SOURCE_FILE_CONFIG
WHERE SOURCE_FILE_TYPE = 'INPUT'
ORDER BY A_SOURCE_KEY, SOURCE_FILE_ID, TABLE_ID;
@@ -679,8 +735,8 @@ ORDER BY ARCHIVAL_STRATEGY;
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET ARCHIVAL_STRATEGY = 'MINIMUM_AGE_MONTHS',
MINIMUM_AGE_MONTHS = 6,
ARCHIVE_ENABLED = 'Y', -- Enable archival
KEEP_IN_TRASH = 'Y' -- Keep files in TRASH for safety
IS_ARCHIVE_ENABLED = 'Y', -- Enable archival
IS_KEPT_IN_TRASH = 'Y' -- Keep files in TRASH for safety
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND SOURCE_FILE_ID = 'CSDB'
AND TABLE_ID = 'CSDB_DEBT';
@@ -688,13 +744,13 @@ COMMIT;
-- Disable archival temporarily for troubleshooting
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET ARCHIVE_ENABLED = 'N' -- Batch operations will skip this table
SET IS_ARCHIVE_ENABLED = 'N' -- Batch operations will skip this table
WHERE TABLE_ID = 'CSDB_DEBT';
COMMIT;
-- Configure TRASH cleanup for test environment
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET KEEP_IN_TRASH = 'N' -- Delete files from TRASH after archival
SET IS_KEPT_IN_TRASH = 'N' -- Delete files from TRASH after archival
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND SOURCE_FILE_ID = 'TEST_SOURCE';
COMMIT;
@@ -705,21 +761,21 @@ SELECT
TABLE_ID,
ARCHIVAL_STRATEGY,
MINIMUM_AGE_MONTHS,
ARCHIVE_ENABLED,
KEEP_IN_TRASH
IS_ARCHIVE_ENABLED,
IS_KEPT_IN_TRASH
FROM CT_MRDS.A_SOURCE_FILE_CONFIG
WHERE SOURCE_FILE_TYPE = 'INPUT'
ORDER BY SOURCE_FILE_ID, TABLE_ID;
-- Summary by archival status
SELECT
ARCHIVE_ENABLED,
KEEP_IN_TRASH,
IS_ARCHIVE_ENABLED,
IS_KEPT_IN_TRASH,
COUNT(*) AS TABLE_COUNT
FROM CT_MRDS.A_SOURCE_FILE_CONFIG
WHERE SOURCE_FILE_TYPE = 'INPUT'
GROUP BY ARCHIVE_ENABLED, KEEP_IN_TRASH
ORDER BY ARCHIVE_ENABLED DESC, KEEP_IN_TRASH DESC;
GROUP BY IS_ARCHIVE_ENABLED, IS_KEPT_IN_TRASH
ORDER BY IS_ARCHIVE_ENABLED DESC, IS_KEPT_IN_TRASH DESC;
```
## Release 01 Configuration
@@ -829,11 +885,11 @@ JOIN CT_ODS.A_LOAD_HISTORY LH ON SFR.A_WORKFLOW_HISTORY_KEY = LH.A_WORKFLOW_HIST
JOIN CT_MRDS.A_SOURCE_FILE_CONFIG SFC ON SFR.A_SOURCE_FILE_CONFIG_KEY = SFC.A_SOURCE_FILE_CONFIG_KEY
WHERE SFC.ARCHIVAL_STRATEGY = 'MINIMUM_AGE_MONTHS'
AND SFR.PROCESSING_STATUS = 'INGESTED'
AND SFC.ARCHIVE_ENABLED = 'Y'
AND SFC.IS_ARCHIVE_ENABLED = 'Y'
ORDER BY LH.LOAD_START;
-- Note: MINIMUM_AGE_MONTHS archives immediately (threshold-independent)
-- If files not archived, check ARCHIVE_ENABLED='Y' and run ARCHIVE_TABLE_DATA
-- If files not archived, check IS_ARCHIVE_ENABLED='Y' and run ARCHIVE_TABLE_DATA
```
**Scenario B**: **THRESHOLD_BASED** or **HYBRID** strategy not archiving
@@ -843,9 +899,9 @@ SELECT
SFC.SOURCE_FILE_ID,
SFC.TABLE_ID,
SFC.ARCHIVAL_STRATEGY,
SFC.FILES_COUNT_OVER_ARCHIVE_THRESHOLD AS FILE_THRESHOLD,
SFC.ROWS_COUNT_OVER_ARCHIVE_THRESHOLD AS ROW_THRESHOLD,
SFC.BYTES_SUM_OVER_ARCHIVE_THRESHOLD AS BYTE_THRESHOLD,
SFC.ARCHIVE_THRESHOLD_FILES_COUNT AS FILE_THRESHOLD,
SFC.ARCHIVE_THRESHOLD_ROWS_COUNT AS ROW_THRESHOLD,
SFC.ARCHIVE_THRESHOLD_BYTES_SUM AS BYTE_THRESHOLD,
COUNT(SFR.A_SOURCE_FILE_RECEIVED_KEY) AS CURRENT_FILES,
SUM(SFR.TOTAL_RECORDS) AS CURRENT_ROWS,
SUM(SFR.FILE_SIZE_BYTES) AS CURRENT_BYTES
@@ -854,13 +910,13 @@ LEFT JOIN CT_MRDS.A_SOURCE_FILE_RECEIVED SFR
ON SFC.A_SOURCE_FILE_CONFIG_KEY = SFR.A_SOURCE_FILE_CONFIG_KEY
AND SFR.PROCESSING_STATUS = 'INGESTED'
WHERE SFC.ARCHIVAL_STRATEGY IN ('THRESHOLD_BASED', 'HYBRID')
AND SFC.ARCHIVE_ENABLED = 'Y'
AND SFC.IS_ARCHIVE_ENABLED = 'Y'
AND SFC.A_SOURCE_FILE_CONFIG_KEY = :yourConfigKey
GROUP BY
SFC.SOURCE_FILE_ID, SFC.TABLE_ID, SFC.ARCHIVAL_STRATEGY,
SFC.FILES_COUNT_OVER_ARCHIVE_THRESHOLD,
SFC.ROWS_COUNT_OVER_ARCHIVE_THRESHOLD,
SFC.BYTES_SUM_OVER_ARCHIVE_THRESHOLD;
SFC.ARCHIVE_THRESHOLD_FILES_COUNT,
SFC.ARCHIVE_THRESHOLD_ROWS_COUNT,
SFC.ARCHIVE_THRESHOLD_BYTES_SUM;
-- Expected: At least ONE threshold (FILE/ROW/BYTE) must be exceeded
-- If no threshold exceeded, archival will NOT trigger (threshold-dependent behavior)
@@ -903,7 +959,7 @@ WHERE object_name LIKE 'ARCHIVE/LM/STANDING_FACILITIES/PARTITION_YEAR=2026/PARTI
**Symptoms**: Files not deleted from TRASH after archival
**Cause**: Configuration has `KEEP_IN_TRASH='Y'` (retain files in TRASH)
**Cause**: Configuration has `IS_KEPT_IN_TRASH='Y'` (retain files in TRASH)
**Verification**:
```sql
@@ -911,8 +967,8 @@ WHERE object_name LIKE 'ARCHIVE/LM/STANDING_FACILITIES/PARTITION_YEAR=2026/PARTI
SELECT
SOURCE_FILE_ID,
TABLE_ID,
KEEP_IN_TRASH,
CASE KEEP_IN_TRASH
IS_KEPT_IN_TRASH,
CASE IS_KEPT_IN_TRASH
WHEN 'Y' THEN 'Files RETAINED in TRASH (manual purge required)'
WHEN 'N' THEN 'Files DELETED immediately after archival'
END AS TRASH_BEHAVIOR
@@ -924,7 +980,7 @@ WHERE TABLE_ID = 'YOUR_TABLE';
```sql
-- Option A: Change configuration to auto-delete (permanent change)
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET KEEP_IN_TRASH = 'N' -- Auto-delete from TRASH after archival
SET IS_KEPT_IN_TRASH = 'N' -- Auto-delete from TRASH after archival
WHERE TABLE_ID = 'YOUR_TABLE';
COMMIT;
@@ -997,7 +1053,7 @@ SELECT
TABLE_ID,
ARCHIVAL_STRATEGY,
MINIMUM_AGE_MONTHS,
DAYS_FOR_ARCHIVE_THRESHOLD
ARCHIVE_THRESHOLD_DAYS
FROM CT_MRDS.A_SOURCE_FILE_CONFIG
WHERE TABLE_ID = 'YOUR_TABLE';
@@ -1027,13 +1083,15 @@ BEGIN
WHERE TABLE_ID = 'YOUR_TABLE'
AND ROWNUM = 1;
vWhereClause := CT_MRDS.FILE_ARCHIVER.GET_ARCHIVAL_WHERE_CLAUSE(vConfig);
DBMS_OUTPUT.PUT_LINE('WHERE Clause: ' || vWhereClause);
-- Note: GET_ARCHIVAL_WHERE_CLAUSE is a private function.
-- To test WHERE clause logic, check A_PROCESS_LOG entries from ARCHIVE_TABLE_DATA
-- which logs the generated WHERE clause at INFO level.
DBMS_OUTPUT.PUT_LINE('Config: ' || vConfig.ARCHIVAL_STRATEGY || ', MIN_AGE=' || vConfig.MINIMUM_AGE_MONTHS);
END;
/
```
#### Issue 3: Package Compilation Errors After Upgrade
#### Issue 7: Package Compilation Errors After Upgrade
**Symptoms**: FILE_ARCHIVER package shows INVALID status
@@ -1087,7 +1145,7 @@ SELECT
SFR.FILE_SIZE_BYTES,
SFR.UPDATED_AT AS ARCHIVED_AT,
TRUNC(SYSDATE - SFR.UPDATED_AT) AS DAYS_IN_TRASH,
SFC.KEEP_IN_TRASH AS TRASH_POLICY
SFC.IS_KEPT_IN_TRASH AS TRASH_POLICY
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED SFR
JOIN CT_MRDS.A_SOURCE_FILE_CONFIG SFC ON SFR.A_SOURCE_FILE_CONFIG_KEY = SFC.A_SOURCE_FILE_CONFIG_KEY
WHERE SFR.PROCESSING_STATUS = 'ARCHIVED_AND_TRASHED'
@@ -1102,18 +1160,18 @@ SELECT
SFC.SOURCE_FILE_ID,
SFC.TABLE_ID,
SFC.ARCHIVAL_STRATEGY,
SFC.ARCHIVE_ENABLED,
SFC.KEEP_IN_TRASH,
SFC.IS_ARCHIVE_ENABLED,
SFC.IS_KEPT_IN_TRASH,
COUNT(CASE WHEN SFR.PROCESSING_STATUS = 'INGESTED' THEN 1 END) AS PENDING_ARCHIVE,
COUNT(CASE WHEN SFR.PROCESSING_STATUS = 'ARCHIVED_AND_TRASHED' THEN 1 END) AS IN_TRASH,
COUNT(CASE WHEN SFR.PROCESSING_STATUS = 'ARCHIVED_AND_PURGED' THEN 1 END) AS PURGED,
MAX(SFR.UPDATED_AT) FILTER (WHERE SFR.PROCESSING_STATUS LIKE 'ARCHIVED%') AS LAST_ARCHIVAL
MAX(CASE WHEN SFR.PROCESSING_STATUS LIKE 'ARCHIVED%' THEN SFR.UPDATED_AT END) AS LAST_ARCHIVAL
FROM CT_MRDS.A_SOURCE_FILE_CONFIG SFC
LEFT JOIN CT_MRDS.A_SOURCE_FILE_RECEIVED SFR ON SFC.A_SOURCE_FILE_CONFIG_KEY = SFR.A_SOURCE_FILE_CONFIG_KEY
WHERE SFC.SOURCE_FILE_TYPE = 'INPUT'
GROUP BY
SFC.SOURCE_FILE_ID, SFC.TABLE_ID, SFC.ARCHIVAL_STRATEGY,
SFC.ARCHIVE_ENABLED, SFC.KEEP_IN_TRASH
SFC.IS_ARCHIVE_ENABLED, SFC.IS_KEPT_IN_TRASH
ORDER BY SFC.SOURCE_FILE_ID, SFC.TABLE_ID;
```
@@ -1134,7 +1192,7 @@ FROM CT_MRDS.A_SOURCE_FILE_CONFIG SFC
JOIN CT_MRDS.A_SOURCE_FILE_RECEIVED SFR ON SFC.A_SOURCE_FILE_CONFIG_KEY = SFR.A_SOURCE_FILE_CONFIG_KEY
JOIN CT_ODS.A_LOAD_HISTORY LH ON SFR.A_WORKFLOW_HISTORY_KEY = LH.A_WORKFLOW_HISTORY_KEY
WHERE SFC.ARCHIVAL_STRATEGY = 'MINIMUM_AGE_MONTHS'
AND SFC.ARCHIVE_ENABLED = 'Y'
AND SFC.IS_ARCHIVE_ENABLED = 'Y'
AND SFR.PROCESSING_STATUS = 'INGESTED'
AND LH.LOAD_START < ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -SFC.MINIMUM_AGE_MONTHS)
GROUP BY SFC.SOURCE_FILE_ID, SFC.TABLE_ID, SFC.MINIMUM_AGE_MONTHS
@@ -1173,20 +1231,30 @@ SELECT
ROUND(SUM(SFR.FILE_SIZE_BYTES) / 1024 / 1024 / 1024, 2) AS SIZE_GB,
MIN(SFR.UPDATED_AT) AS OLDEST_IN_TRASH,
MAX(SFR.UPDATED_AT) AS NEWEST_IN_TRASH,
SFC.KEEP_IN_TRASH AS POLICY
SFC.IS_KEPT_IN_TRASH AS POLICY
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED SFR
JOIN CT_MRDS.A_SOURCE_FILE_CONFIG SFC ON SFR.A_SOURCE_FILE_CONFIG_KEY = SFC.A_SOURCE_FILE_CONFIG_KEY
WHERE SFR.PROCESSING_STATUS = 'ARCHIVED_AND_TRASHED'
GROUP BY SFC.SOURCE_FILE_ID, SFC.KEEP_IN_TRASH
GROUP BY SFC.SOURCE_FILE_ID, SFC.IS_KEPT_IN_TRASH
ORDER BY SIZE_GB DESC;
```
## Version History
### v3.3.0 (Current - 2026-02-11)
### v3.4.0 (Current - 2026-03-17)
- **MARS-1409**: Added `IS_WORKFLOW_SUCCESS_REQUIRED` flag to A_SOURCE_FILE_CONFIG
- `'Y'` (default) = archivization requires `WORKFLOW_SUCCESSFUL='Y'` in A_WORKFLOW_HISTORY (standard Airflow+DBT flow)
- `'N'` = archive regardless of workflow status (bypass for manual/non-DBT sources)
- `IS_WORKFLOW_SUCCESS_REQUIRED` stored in A_TABLE_STAT and A_TABLE_STAT_HIST at statistics gather time
- GATHER_TABLE_STAT: conditional `WORKFLOW_SUCCESSFUL='Y'` filter controlled by the flag
- ARCHIVE_TABLE_DATA: conditional `WORKFLOW_SUCCESSFUL='Y'` filter controlled by the flag
- Added `pIsWorkflowSuccessRequired` parameter to FILE_MANAGER.ADD_SOURCE_FILE_CONFIG
- FILE_MANAGER updated to v3.6.2+
### v3.3.0 (2026-02-11)
- **BREAKING CHANGE**: Removed `pKeepInTrash` parameter from ARCHIVE_TABLE_DATA
- Added `ARCHIVE_ENABLED` column to A_SOURCE_FILE_CONFIG for selective archiving control
- Added `KEEP_IN_TRASH` column to A_SOURCE_FILE_CONFIG (replaces pKeepInTrash parameter)
- Added `IS_ARCHIVE_ENABLED` column to A_SOURCE_FILE_CONFIG for selective archiving control
- Added `IS_KEPT_IN_TRASH` column to A_SOURCE_FILE_CONFIG (replaces pKeepInTrash parameter)
- Added batch procedures with 3-level granularity (config/source/all):
- ARCHIVE_ALL - Batch archival procedure
- GATHER_TABLE_STAT_ALL - Batch statistics procedure
@@ -1226,7 +1294,7 @@ ORDER BY SIZE_GB DESC;
### v2.0.0 (Legacy)
- Initial FILE_ARCHIVER package
- THRESHOLD_BASED archival only
- Fixed DAYS_FOR_ARCHIVE_THRESHOLD configuration
- Fixed ARCHIVE_THRESHOLD_DAYS configuration
## Related Documentation
@@ -1337,7 +1405,7 @@ ORDER BY SIZE_GB DESC;
### TRASH Folder Retention Best Practices
1. **Default Behavior (KEEP_IN_TRASH = 'Y' - Recommended)**:
1. **Default Behavior (IS_KEPT_IN_TRASH = 'Y' - Recommended)**:
- Keeps CSV files in TRASH folder after archival
- Provides safety net for rollback if archival issues occur
- Supports compliance and audit requirements
@@ -1346,11 +1414,11 @@ ORDER BY SIZE_GB DESC;
- Configuration:
```sql
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET KEEP_IN_TRASH = 'Y'
SET IS_KEPT_IN_TRASH = 'Y'
WHERE SOURCE_FILE_TYPE = 'INPUT' AND TABLE_ID = 'YOUR_TABLE';
```
2. **TRASH Cleanup (KEEP_IN_TRASH = 'N')**:
2. **TRASH Cleanup (IS_KEPT_IN_TRASH = 'N')**:
- Deletes CSV files from TRASH folder after successful archival
- Reduces storage costs in DATA bucket
- Status: ARCHIVED_AND_PURGED
@@ -1358,7 +1426,7 @@ ORDER BY SIZE_GB DESC;
- Configuration:
```sql
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET KEEP_IN_TRASH = 'N'
SET IS_KEPT_IN_TRASH = 'N'
WHERE SOURCE_FILE_TYPE = 'INPUT' AND TABLE_ID = 'YOUR_TABLE';
```
@@ -1368,7 +1436,7 @@ ORDER BY SIZE_GB DESC;
SELECT
SOURCE_FILE_NAME,
PROCESSING_STATUS,
ARCH_FILE_NAME,
ARCH_PATH,
PARTITION_YEAR,
PARTITION_MONTH
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
@@ -1391,3 +1459,4 @@ ORDER BY SIZE_GB DESC;
```