Compare commits


69 Commits

Author SHA1 Message Date
Grzegorz Michalski
a35e28042b feat(FILE_ARCHIVER): Improve archival logic and error handling in FILE_ARCHIVER procedures 2026-03-23 11:48:37 +01:00
Grzegorz Michalski
92feb95ae0 feat(FILE_ARCHIVER): Enhance documentation with new function details and clarify private functions 2026-03-20 13:37:05 +01:00
Grzegorz Michalski
74b8857096 feat(FILE_ARCHIVER): Rename IS_KEEP_IN_TRASH to IS_KEPT_IN_TRASH for consistency in configuration 2026-03-20 13:26:37 +01:00
Grzegorz Michalski
24997b1583 feat(MARS-1409): Add prerequisite checks for MARS-1409 objects in installation script 2026-03-20 13:13:26 +01:00
Grzegorz Michalski
eb9b2bc38b feat(FILE_MANAGER): Rename pIsKeepInTrash to pIsKeptInTrash for consistency in parameter naming 2026-03-19 13:29:51 +01:00
Grzegorz Michalski
2ea708a694 Merge branch 'main' of https://git.itbi.mywire.org/admin/mars 2026-03-19 12:26:27 +01:00
Grzegorz Michalski
12c58f32a3 feat(MARS-1409): Rename IS_KEEP_IN_TRASH to IS_KEPT_IN_TRASH across relevant files and update related logic 2026-03-19 12:23:29 +01:00
Grzegorz Michalski
811df6e8b1 feat(FILE_ARCHIVER): Enhance logging messages to include detailed parameters for better error tracking 2026-03-19 12:14:35 +01:00
Grzegorz Michalski
c2e9409e55 feat(FILE_ARCHIVER): Enhance logging by adding parameters to log events for better traceability 2026-03-19 11:51:37 +01:00
Grzegorz Michalski
c96bf2051f Merge branch 'main' of https://git.itbi.mywire.org/admin/mars 2026-03-19 11:13:37 +01:00
Grzegorz Michalski
5d0e03d7ad feat(MARS-1409): Add DATA_EXPORTER package installation and rollback scripts 2026-03-19 11:13:09 +01:00
Grzegorz Michalski
ffd6c7eeae feat(ENV_MANAGER): Add new error codes for workflow key validation and update package version to 3.3.0
refactor(FILE_MANAGER): Remove redundant error logging for unknown errors
2026-03-19 11:13:02 +01:00
Grzegorz Michalski
bbdf008125 Add DATA_EXPORTER package for comprehensive data export capabilities
- Introduced CT_MRDS.DATA_EXPORTER package to facilitate data exports in CSV and Parquet formats.
- Implemented support for cloud storage integration via Oracle Cloud Infrastructure (OCI).
- Added versioning and detailed version history for tracking changes and improvements.
- Included main export procedures: EXPORT_TABLE_DATA, EXPORT_TABLE_DATA_BY_DATE, and EXPORT_TABLE_DATA_TO_CSV_BY_DATE.
- Enhanced parallel processing capabilities for improved performance during data exports.
2026-03-19 10:50:28 +01:00
Grzegorz Michalski
396e7416f6 feat(FILE_ARCHIVER): Update SQL query in ARCHIVE_TABLE_DATA for improved archival statistics and column order consistency 2026-03-19 09:37:42 +01:00
Grzegorz Michalski
0ed75875ac Refactor MARS-1409: Rollback changes to A_SOURCE_FILE_RECEIVED and related tables
- Dropped A_WORKFLOW_HISTORY_KEY column from A_SOURCE_FILE_RECEIVED with data preservation.
- Removed unnecessary checks for column existence during rollback.
- Updated A_SOURCE_FILE_CONFIG, A_TABLE_STAT, and A_TABLE_STAT_HIST to their pre-MARS-1409 structures, excluding new columns added in MARS-1409.
- Adjusted FILE_ARCHIVER package to reflect changes in statistics handling and archival triggers.
- Revised rollback script to ensure proper order of operations for restoring previous versions of packages and tables.
2026-03-19 08:46:49 +01:00
Grzegorz Michalski
a7db9b67bc Merge branch 'main' of https://git.itbi.mywire.org/admin/mars 2026-03-18 18:19:19 +01:00
Grzegorz Michalski
ce9b6eeff6 feat(FILE_MANAGER): Update package version to 3.6.3 and enhance ADD_SOURCE_FILE_CONFIG with new parameters for archival control
- Bump package version to 3.6.3 and update build date.
- Add new parameters: pIsArchiveEnabled, pIsKeepInTrash, pArchivalStrategy, pMinimumAgeMonths to ADD_SOURCE_FILE_CONFIG.
- Include pIsWorkflowSuccessRequired parameter to control workflow success requirement for archival.
- Update version history to reflect changes.

feat(A_SOURCE_FILE_CONFIG): Modify table structure to include new archival control flags

- Add IS_WORKFLOW_SUCCESS_REQUIRED column to A_SOURCE_FILE_CONFIG for workflow bypass functionality.
- Update constraints and comments for new columns.
- Ensure backward compatibility with default values.

fix(A_TABLE_STAT, A_TABLE_STAT_HIST): Extend table structures to accommodate new workflow success tracking

- Add IS_WORKFLOW_SUCCESS_REQUIRED column to both A_TABLE_STAT and A_TABLE_STAT_HIST.
- Update comments to clarify the purpose of new columns.

docs(FILE_ARCHIVER_Guide): Revise documentation to reflect new archival features and configurations

- Document new IS_WORKFLOW_SUCCESS_REQUIRED flag and its implications for archival processes.
- Update examples and configurations to align with recent changes in the database schema.
- Ensure clarity on archival strategies and their configurations.
2026-03-18 18:19:04 +01:00
Grzegorz Michalski
0725119b45 feat: Enhance MARS-1409 post-hook scripts to include checks for empty ODS tables and update installation script for workflow key diagnosis 2026-03-17 12:10:18 +01:00
Grzegorz Michalski
896e67bcb9 feat: Refactor A_SOURCE_FILE_CONFIG table structure and update comments for clarity 2026-03-17 10:58:01 +01:00
Grzegorz Michalski
ad5a6f393a feat: Update installation script to reflect expected duration for MARS-1409 post-hook process 2026-03-17 09:54:42 +01:00
Grzegorz Michalski
a4ac132b76 feat: Implement MARS-1409 changes to add ARCHIVAL_STRATEGY and ARCH_MINIMUM_AGE_MONTHS columns to A_TABLE_STAT and A_TABLE_STAT_HIST, and update FILE_ARCHIVER for handling these new fields 2026-03-17 08:23:14 +01:00
Grzegorz Michalski
6468d12349 minor 2026-03-13 13:51:53 +01:00
Grzegorz Michalski
fe0f7bce18 feat: Enhance FILE_ARCHIVER package to handle empty ODS bucket scenarios with improved statistics initialization 2026-03-13 13:34:38 +01:00
Grzegorz Michalski
6b2f60f413 feat: Update FILE_ARCHIVER package to version 3.3.1 with improved handling for empty ODS bucket scenarios 2026-03-13 11:40:19 +01:00
Grzegorz Michalski
ca11debd93 minor 2026-03-13 11:35:11 +01:00
Grzegorz Michalski
24e6bce18c minor changes 2026-03-13 11:34:59 +01:00
Grzegorz Michalski
aa03dd1616 feat: Update FILE_MANAGER package to version 3.6.1 with fixes for CHAR/NCHAR/NVARCHAR2 column definitions 2026-03-13 09:11:28 +01:00
Grzegorz Michalski
9190681051 MARS-1409-POSTHOOK 2026-03-13 09:08:44 +01:00
Grzegorz Michalski
096994d514 feat: Add diagnostic script for workflow key status in MARS-1409 post-hook 2026-03-13 08:43:14 +01:00
Grzegorz Michalski
1385bfb9e7 feat: Implement MARS-1409 post-hook for backfilling A_WORKFLOW_HISTORY_KEY
- Added .gitignore to exclude temporary folders.
- Created SQL script to update existing A_WORKFLOW_HISTORY_KEY in A_SOURCE_FILE_RECEIVED.
- Implemented rollback script to clear backfilled A_WORKFLOW_HISTORY_KEY values.
- Added README.md for installation and usage instructions.
- Developed master installation and rollback scripts for MARS-1409 post-hook.
- Verified installation and rollback processes with detailed checks.
- Updated trigger logic to manage workflow history updates.
- Ensured proper version tracking and verification for related packages.
2026-03-13 08:30:32 +01:00
Grzegorz Michalski
7d2fb34ad9 MARS-1005-PREHOOK 2026-03-12 08:51:15 +01:00
Grzegorz Michalski
202b535f9f Update DATA_EXPORTER package to v2.17.0: Fix RFC 4180 compliance and Parquet format corruption 2026-03-12 08:50:08 +01:00
Grzegorz Michalski
5ba6c30fda MARS-1005-PREHOOK 2026-03-11 10:34:47 +01:00
Grzegorz Michalski
64a4b9a2f0 Refactor rollback script to delete specific legacy files and adjust object URI construction 2026-03-09 11:46:01 +01:00
Grzegorz Michalski
dec3e7137e Refactor rollback script to delete only files registered by MARS-1005 and improve output messages 2026-03-09 10:24:24 +01:00
Grzegorz Michalski
0ecc119ee9 Refactor data integrity verification script to use A_ETL_LOAD_SET_FK instead of A_WORKFLOW_HISTORY_KEY 2026-03-09 09:31:30 +01:00
Grzegorz Michalski
182e6240d3 Update export script comments for clarity and consistency 2026-03-09 09:25:20 +01:00
Grzegorz Michalski
b81e524351 Refactor MARS-1005 scripts for OU_TOP legacy data export and rollback
- Updated SQL scripts to verify data integrity for 6 OU_TOP.LEGACY_* tables instead of 3 C2D MPEC tables.
- Modified rollback script to delete exported CSV files from ODS/TOP/ bucket paths.
- Enhanced verification script to check for remaining files and cloud bucket contents specific to MARS-1005.
- Adjusted install script to reflect changes in target tables and their corresponding paths in the ODS bucket.
- Updated README to include instructions for the new MARS-1005 installation and rollback processes.
2026-03-06 14:34:12 +01:00
Grzegorz Michalski
73e99b6e76 MARS-1005 2026-03-06 12:06:18 +01:00
Grzegorz Michalski
113ea0a618 Refactor MARS-1409 SQL scripts for workflow history key management
- Added checks for existing columns before adding or dropping A_WORKFLOW_HISTORY_KEY in relevant scripts to prevent errors.
- Updated rollback scripts to ensure proper restoration of previous states, including recompilation of dependent packages.
- Introduced a diagnostic script to assess the status of workflow keys against ODS tables, providing detailed reporting on discrepancies.
- Adjusted trigger definitions to accommodate new workflow names and ensure correct handling of workflow history.
- Modified master rollback script to streamline the rollback process and improve clarity in step descriptions.
2026-03-05 12:33:59 +01:00
Grzegorz Michalski
59e18d9b35 Add error handling for TRG_A_WORKFLOW_HISTORY trigger installation 2026-03-04 10:16:35 +01:00
Grzegorz Michalski
a58a5ae82a ignore export files 2026-03-03 09:48:43 +01:00
Grzegorz Michalski
b537719b64 added template tables 2026-03-03 09:47:24 +01:00
Grzegorz Michalski
4de14b64fb remove unneeded 2026-03-03 09:46:06 +01:00
Grzegorz Michalski
36a04dde04 MARS-1409 2026-03-02 14:26:12 +01:00
Grzegorz Michalski
cad6e63479 exported files from dev 2026-03-02 13:51:59 +01:00
Grzegorz Michalski
7db10725a0 MARS-1409 2026-03-02 10:15:22 +01:00
Grzegorz Michalski
a13a9d415f remove author 2026-03-02 10:14:53 +01:00
Grzegorz Michalski
1c6f552df9 MARS-1409 package skeleton 2026-02-27 07:32:21 +01:00
Grzegorz Michalski
e9d4056451 Merge develop into main - DATA_EXPORTER v2.14.0 optimization 2026-02-26 20:39:33 +01:00
Grzegorz Michalski
60b218d211 Hotfix - Add filtering for successful workflows in archival queries 2026-02-26 20:37:03 +01:00
Grzegorz Michalski
819b6f7880 Update version history in FILE_MANAGER package to include changes for MARS-828 compatibility 2026-02-25 09:50:30 +01:00
Grzegorz Michalski
c68d5bfe2c feat(MARS-835): Enhance EXPORT_PARTITION_PARALLEL with pTaskName parameter for session isolation and optimize chunk retrieval logic 2026-02-25 09:49:25 +01:00
Grzegorz Michalski
c607bbe26e Update CSDB DEBT tables to set MINIMUM_AGE_MONTHS to 0 for current month only 2026-02-25 07:00:04 +01:00
Grzegorz Michalski
1569237306 Add T2_PEAK_LIQUIDITY_NEED template table with column comments 2026-02-24 19:20:28 +01:00
Grzegorz Michalski
472a724fe0 Update FILE_MANAGER package to version 3.5.1; fix TIMESTAMP field syntax for SQL*Loader compatibility and add T2_PEAK_LIQUIDITY_NEED template table 2026-02-24 19:19:45 +01:00
Grzegorz Michalski
04d4f6ac02 feat(MARS-835): Update export scripts to support HIST-only strategy, including verification and rollback adjustments 2026-02-24 19:18:16 +01:00
Grzegorz Michalski
ca5d8b320c feat(MARS-835): Enhance DELETE_FAILED_EXPORT_FILE procedure to delete all matching files before retrying export, preventing data duplication in parallel processing 2026-02-24 09:38:09 +01:00
Grzegorz Michalski
2605896469 refactor(EXPORT): Improve formatting and logging 2026-02-24 08:22:42 +01:00
Grzegorz Michalski
b588b0bb72 Add FILE_MANAGER package installation and rollback scripts; update installation process for compatibility with MARS-828 2026-02-23 09:14:20 +01:00
Grzegorz Michalski
6060f93fde wk 2026-02-20 15:23:12 +01:00
Grzegorz Michalski
99aca3af40 wk1 2026-02-20 15:22:58 +01:00
Grzegorz Michalski
1089184367 wk1 2026-02-20 15:18:08 +01:00
Grzegorz Michalski
ff034fcd68 MARS-1057 2026-02-20 15:08:18 +01:00
Grzegorz Michalski
b85172ae84 wk1 2026-02-20 14:25:19 +01:00
Grzegorz Michalski
577c94f363 Rename columns
ARCHIVE_ENABLED → IS_ARCHIVE_ENABLED (boolean naming convention)
KEEP_IN_TRASH → IS_KEEP_IN_TRASH (boolean naming convention)
2026-02-20 11:56:41 +01:00
Grzegorz Michalski
11723f6c88 documentation 2026-02-20 11:34:58 +01:00
Grzegorz Michalski
b63be15f5d Merge branch 'main' of https://git.itbi.mywire.org/admin/mars 2026-02-20 10:18:33 +01:00
Grzegorz Michalski
28972e7428 fix(README): Correct typo in active tasks and update installation commands for MARS-1057 2026-02-20 10:18:25 +01:00
335 changed files with 38820 additions and 1206 deletions

.gitignore

@@ -19,6 +19,8 @@ issues/
ehthumbs.db
Thumbs.db
MARS_Packages/mrds_elt-dev-database/mrds_elt-dev-database/database/CT_MRDS/export/*
+MARS_Packages/REL01/MARS-1056/confluence/
+MARS_Packages/REL01/MARS-1056/log/
MARS_Packages/REL01/MARS-1046/confluence/


@@ -24,7 +24,9 @@ BEGIN
pKeyColumnName => 'A_ETL_LOAD_SET_FK',
pBucketArea => 'ARCHIVE',
pFolderName => 'ARCHIVE/LM/LM_TTS_HEADER',
-pParallelDegree => 1, pTemplateTableName => 'CT_ET_TEMPLATES.LM_TTS_HEADER', pJobClass => 'high'
+pParallelDegree => 1,
+pTemplateTableName => 'CT_ET_TEMPLATES.LM_TTS_HEADER',
+pJobClass => 'high'
);
DBMS_OUTPUT.PUT_LINE('SUCCESS: LEGACY_TTS_HEADER exported');
EXCEPTION
@@ -44,7 +46,9 @@ BEGIN
pKeyColumnName => 'A_ETL_LOAD_SET_FK',
pBucketArea => 'ARCHIVE',
pFolderName => 'ARCHIVE/LM/LM_TTS_ITEM',
-pParallelDegree => 1, pTemplateTableName => 'CT_ET_TEMPLATES.LM_TTS_ITEM', pJobClass => 'high'
+pParallelDegree => 1,
+pTemplateTableName => 'CT_ET_TEMPLATES.LM_TTS_ITEM',
+pJobClass => 'high'
);
DBMS_OUTPUT.PUT_LINE('SUCCESS: LEGACY_TTS_ITEM exported');
EXCEPTION
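The hunk above only shows the parameter list of the export call; the enclosing procedure name is outside the diff context. A purely illustrative sketch of what such an invocation might look like, assuming the `EXPORT_TABLE_DATA` procedure named in the DATA_EXPORTER commit above and a hypothetical `pTableName` parameter (neither is visible in this hunk):

```sql
-- Hypothetical sketch only: EXPORT_TABLE_DATA is taken from the DATA_EXPORTER
-- commit message; pTableName is an assumed parameter. Only the parameters
-- below the first one actually appear in the diff.
BEGIN
  CT_MRDS.DATA_EXPORTER.EXPORT_TABLE_DATA(
    pTableName         => 'OU_TOP.LEGACY_TTS_HEADER',  -- assumed
    pKeyColumnName     => 'A_ETL_LOAD_SET_FK',
    pBucketArea        => 'ARCHIVE',
    pFolderName        => 'ARCHIVE/LM/LM_TTS_HEADER',
    pParallelDegree    => 1,
    pTemplateTableName => 'CT_ET_TEMPLATES.LM_TTS_HEADER',
    pJobClass          => 'high'
  );
  DBMS_OUTPUT.PUT_LINE('SUCCESS: LEGACY_TTS_HEADER exported');
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('FAILED: ' || SQLERRM);
END;
/
```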


@@ -11,8 +11,8 @@ PROMPT ========================================
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG ADD (
ARCHIVAL_STRATEGY VARCHAR2(30) DEFAULT 'THRESHOLD_BASED' NOT NULL,
MINIMUM_AGE_MONTHS NUMBER(3) DEFAULT NULL,
-ARCHIVE_ENABLED CHAR(1) DEFAULT 'N' NOT NULL,
-KEEP_IN_TRASH CHAR(1) DEFAULT 'Y' NOT NULL
+IS_ARCHIVE_ENABLED CHAR(1) DEFAULT 'N' NOT NULL,
+IS_KEEP_IN_TRASH CHAR(1) DEFAULT 'Y' NOT NULL
);
-- Add check constraints
@@ -22,10 +22,10 @@ ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG ADD CONSTRAINT
);
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG ADD CONSTRAINT
-CHK_ARCHIVE_ENABLED CHECK (ARCHIVE_ENABLED IN ('Y', 'N'));
+CHK_IS_ARCHIVE_ENABLED CHECK (IS_ARCHIVE_ENABLED IN ('Y', 'N'));
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG ADD CONSTRAINT
-CHK_KEEP_IN_TRASH CHECK (KEEP_IN_TRASH IN ('Y', 'N'));
+CHK_IS_KEEP_IN_TRASH CHECK (IS_KEEP_IN_TRASH IN ('Y', 'N'));
-- Add comments
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY IS
@@ -34,10 +34,10 @@ COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY IS
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS IS
'Minimum age in months for archival (used with MINIMUM_AGE_MONTHS or HYBRID strategies)';
-COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_ENABLED IS
+COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED IS
'Y=Enable archiving, N=Skip archiving. Controls if table participates in archival process. Added in MARS-828 v3.3.0';
-COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.KEEP_IN_TRASH IS
+COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_KEEP_IN_TRASH IS
'Y=Keep files in TRASH after archiving, N=Delete immediately. Controls TRASH retention policy. Added in MARS-828 v3.3.0';
-- Verify columns added
@@ -50,7 +50,7 @@ SELECT
FROM all_tab_columns
WHERE owner = 'CT_MRDS'
AND table_name = 'A_SOURCE_FILE_CONFIG'
-AND column_name IN ('ARCHIVAL_STRATEGY', 'MINIMUM_AGE_MONTHS', 'ARCHIVE_ENABLED', 'KEEP_IN_TRASH')
+AND column_name IN ('ARCHIVAL_STRATEGY', 'MINIMUM_AGE_MONTHS', 'IS_ARCHIVE_ENABLED', 'IS_KEEP_IN_TRASH')
ORDER BY column_id;
PROMPT ========================================
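The renamed flags are plain CHAR(1) Y/N columns guarded by check constraints, so a consumer such as FILE_ARCHIVER can select eligible configurations with an ordinary predicate. A minimal illustrative query (table and column names come from the scripts in this compare; the query itself is a sketch, not a shipped script):

```sql
-- Illustrative only: list configurations that participate in archival
-- and delete files immediately instead of retaining them in TRASH.
SELECT A_SOURCE_KEY,
       TABLE_ID,
       ARCHIVAL_STRATEGY,
       MINIMUM_AGE_MONTHS
FROM   CT_MRDS.A_SOURCE_FILE_CONFIG
WHERE  IS_ARCHIVE_ENABLED = 'Y'
AND    IS_KEEP_IN_TRASH   = 'N'
ORDER  BY A_SOURCE_KEY, TABLE_ID;
```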


@@ -0,0 +1,49 @@
-- MARS-828: Rename threshold columns for consistency
-- Author: Grzegorz Michalski
-- Date: 2026-01-28
-- Description: Renames threshold columns to use consistent ARCHIVE_THRESHOLD_* prefix pattern
-- Old naming was inconsistent (DAYS_FOR vs FILES_COUNT_OVER)
-- New naming groups all threshold columns with common prefix
PROMPT ========================================
PROMPT MARS-828: Renaming threshold columns for consistency
PROMPT ========================================
-- Rename threshold columns to consistent ARCHIVE_THRESHOLD_* pattern
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG
RENAME COLUMN DAYS_FOR_ARCHIVE_THRESHOLD TO ARCHIVE_THRESHOLD_DAYS;
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG
RENAME COLUMN FILES_COUNT_OVER_ARCHIVE_THRESHOLD TO ARCHIVE_THRESHOLD_FILES_COUNT;
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG
RENAME COLUMN BYTES_SUM_OVER_ARCHIVE_THRESHOLD TO ARCHIVE_THRESHOLD_BYTES_SUM;
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG
RENAME COLUMN ROWS_COUNT_OVER_ARCHIVE_THRESHOLD TO ARCHIVE_THRESHOLD_ROWS_COUNT;
-- Verify column renames
PROMPT ========================================
PROMPT Verifying threshold column renames...
PROMPT ========================================
SELECT
column_name,
data_type,
data_length
FROM all_tab_columns
WHERE owner = 'CT_MRDS'
AND table_name = 'A_SOURCE_FILE_CONFIG'
AND column_name LIKE 'ARCHIVE_THRESHOLD%'
ORDER BY column_id;
PROMPT ========================================
PROMPT Expected columns:
PROMPT ARCHIVE_THRESHOLD_DAYS
PROMPT ARCHIVE_THRESHOLD_FILES_COUNT
PROMPT ARCHIVE_THRESHOLD_BYTES_SUM
PROMPT ARCHIVE_THRESHOLD_ROWS_COUNT
PROMPT ========================================
PROMPT Threshold columns renamed successfully
PROMPT ========================================
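Later commits in this compare (e.g. 113ea0a618) add existence checks before altering columns so that reruns and partial installs do not fail. A hedged sketch of that pattern applied to the first rename above (the guard block is illustrative; the shipped script may differ):

```sql
-- Illustrative idempotent rename: only rename when the old column still exists,
-- so a rerun after a successful install is a no-op instead of an ORA error.
DECLARE
  vCount NUMBER;
BEGIN
  SELECT COUNT(*)
  INTO   vCount
  FROM   all_tab_columns
  WHERE  owner = 'CT_MRDS'
  AND    table_name = 'A_SOURCE_FILE_CONFIG'
  AND    column_name = 'DAYS_FOR_ARCHIVE_THRESHOLD';
  IF vCount = 1 THEN
    EXECUTE IMMEDIATE
      'ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG ' ||
      'RENAME COLUMN DAYS_FOR_ARCHIVE_THRESHOLD TO ARCHIVE_THRESHOLD_DAYS';
  END IF;
END;
/
```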


@@ -0,0 +1,160 @@
-- =====================================================================
-- Script: 01b_MARS_828_add_column_comments.sql
-- MARS Issue: MARS-828
-- Author: Grzegorz Michalski
-- Date: 2026-02-20
-- Purpose: Add comprehensive column comments for A_SOURCE_FILE_CONFIG and A_SOURCE_FILE_RECEIVED tables
-- Description: Documents all columns to improve database maintainability and user understanding
-- =====================================================================
PROMPT ========================================
PROMPT MARS-828: Adding comprehensive column comments
PROMPT ========================================
-- =====================================================================
-- A_SOURCE_FILE_CONFIG Column Comments
-- =====================================================================
PROMPT Adding column comments for A_SOURCE_FILE_CONFIG...
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY IS
'Primary key - unique identifier for source file configuration record';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_KEY IS
'Foreign key to A_SOURCE table - identifies the source system (e.g., LM, C2D, CSDB)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE IS
'Type of file configuration: INPUT (data files), CONTAINER (xml files), or LOAD_CONFIG (configuration files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID IS
'Unique identifier for the source file within the source system (e.g., UC_DISSEM, STANDING_FACILITIES)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_DESC IS
'Human-readable description of the source file and its purpose';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_NAME_PATTERN IS
'Filename pattern for matching incoming files (supports wildcards, e.g., UC_NMA_DISSEM-*.csv)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID IS
'Identifier for the target table where data will be loaded (without schema prefix)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME IS
'Fully qualified name of template table in CT_ET_TEMPLATES schema used for external table creation';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY IS
'Foreign key to parent container configuration when this file is part of an xml (NULL for standalone files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_DAYS IS
'Threshold for THRESHOLD_BASED strategy: archive data older than N days';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_FILES_COUNT IS
'Trigger archival when file count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_BYTES_SUM IS
'Trigger archival when total size in bytes exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_ROWS_COUNT IS
'Trigger archival when total row count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ODS_SCHEMA_NAME IS
'Schema name where ODS external tables are created (typically ODS)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.HOURS_TO_EXPIRE_STATISTICS IS
'Number of hours before table statistics expire and need to be recalculated';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY IS
'Archival strategy: THRESHOLD_BASED (days-based), MINIMUM_AGE_MONTHS (0=current month, N=retain N months), HYBRID (combination)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS IS
'Minimum age in months before archival (required for MINIMUM_AGE_MONTHS and HYBRID strategies, 0=current month only)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING IS
'Oracle character set name for CSV files (e.g., UTF8, WE8MSWIN1252, EE8ISO8859P2)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED IS
'Y=Enable archiving, N=Skip archiving. Controls if table participates in archival process';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_KEEP_IN_TRASH IS
'Y=Keep files in TRASH after archiving, N=Delete immediately. Controls TRASH retention policy';
-- =====================================================================
-- A_SOURCE_FILE_RECEIVED Column Comments
-- =====================================================================
PROMPT Adding column comments for A_SOURCE_FILE_RECEIVED...
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY IS
'Primary key - unique identifier for received file record';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_CONFIG_KEY IS
'Foreign key to A_SOURCE_FILE_CONFIG - links file to its configuration and processing rules';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.SOURCE_FILE_NAME IS
'Full object name/path of the received file in OCI Object Storage (includes INBOX prefix)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.CHECKSUM IS
'MD5 checksum of file content for integrity verification and duplicate detection';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.CREATED IS
'Timestamp with timezone when file was created/uploaded to Object Storage (from DBMS_CLOUD.LIST_OBJECTS)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.BYTES IS
'File size in bytes';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.RECEPTION_DATE IS
'Date when file was registered in the system (extracted from CREATED timestamp)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PROCESSING_STATUS IS
'Current processing status: RECEIVED → VALIDATED → READY_FOR_INGESTION → INGESTED → ARCHIVED_AND_TRASHED → ARCHIVED_AND_PURGED';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.EXTERNAL_TABLE_NAME IS
'Name of temporary external table created for file validation (dropped after validation)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PARTITION_YEAR IS
'Year partition value (YYYY format) when file was archived to ARCHIVE bucket with Hive-style partitioning';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PARTITION_MONTH IS
'Month partition value (MM format) when file was archived to ARCHIVE bucket with Hive-style partitioning';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.ARCH_PATH IS
'Archive directory prefix in ARCHIVE bucket containing archived Parquet files (supports multiple files from parallel DBMS_CLOUD.EXPORT_DATA)';
-- =====================================================================
-- Verification
-- =====================================================================
PROMPT
PROMPT Verifying column comments...
PROMPT
SELECT
table_name,
COUNT(*) as total_columns,
COUNT(comments) as documented_columns,
COUNT(*) - COUNT(comments) as undocumented_columns
FROM all_col_comments
WHERE owner = 'CT_MRDS'
AND table_name IN ('A_SOURCE_FILE_CONFIG', 'A_SOURCE_FILE_RECEIVED')
GROUP BY table_name
ORDER BY table_name;
PROMPT
PROMPT Detailed column documentation status:
PROMPT
SELECT
table_name,
column_name,
CASE WHEN comments IS NULL THEN 'MISSING' ELSE 'OK' END as comment_status
FROM all_col_comments
WHERE owner = 'CT_MRDS'
AND table_name IN ('A_SOURCE_FILE_CONFIG', 'A_SOURCE_FILE_RECEIVED')
ORDER BY table_name, column_name;
PROMPT
PROMPT ========================================
PROMPT Column comments added successfully
PROMPT ========================================
PROMPT A_SOURCE_FILE_CONFIG: All 20 columns documented
PROMPT A_SOURCE_FILE_RECEIVED: All 12 columns documented
PROMPT ========================================


@@ -59,9 +59,23 @@ WHERE owner = 'CT_MRDS'
AND object_type IN ('PACKAGE', 'PACKAGE BODY')
ORDER BY object_type;
--- 5. Check for compilation errors
+-- 5. Check FILE_MANAGER package compilation status
PROMPT
-PROMPT 5. Checking for compilation errors...
+PROMPT 5. Checking FILE_MANAGER package status...
+SELECT
+object_name,
+object_type,
+status,
+TO_CHAR(last_ddl_time, 'YYYY-MM-DD HH24:MI:SS') as last_ddl_time
+FROM all_objects
+WHERE owner = 'CT_MRDS'
+AND object_name = 'FILE_MANAGER'
+AND object_type IN ('PACKAGE', 'PACKAGE BODY')
+ORDER BY object_type;
+-- 6. Check for compilation errors
+PROMPT
+PROMPT 6. Checking for compilation errors (FILE_ARCHIVER)...
SELECT
name,
type,
@@ -73,14 +87,31 @@ WHERE owner = 'CT_MRDS'
AND name = 'FILE_ARCHIVER'
ORDER BY type, sequence;
--- 6. Verify package version
+-- 7. Check for compilation errors (FILE_MANAGER)
PROMPT
-PROMPT 6. Verifying FILE_ARCHIVER version...
-SELECT CT_MRDS.FILE_ARCHIVER.GET_VERSION() as package_version FROM DUAL;
+PROMPT 7. Checking for compilation errors (FILE_MANAGER)...
+SELECT
+name,
+type,
+line,
+position,
+text
+FROM all_errors
+WHERE owner = 'CT_MRDS'
+AND name = 'FILE_MANAGER'
+ORDER BY type, sequence;
--- 7. Test trigger validation
+-- 8. Verify package versions
PROMPT
-PROMPT 7. Testing trigger validation (should fail)...
+PROMPT 8. Verifying package versions...
+PROMPT FILE_ARCHIVER version:
+SELECT CT_MRDS.FILE_ARCHIVER.GET_VERSION() as package_version FROM DUAL;
+PROMPT FILE_MANAGER version:
+SELECT CT_MRDS.FILE_MANAGER.GET_VERSION() as package_version FROM DUAL;
+-- 9. Test trigger validation
+PROMPT
+PROMPT 9. Testing trigger validation (should fail)...
WHENEVER SQLERROR CONTINUE
SET SERVEROUTPUT ON
DECLARE


@@ -13,7 +13,7 @@
--
-- Configuration by group:
-- - 19 LM tables: MINIMUM_AGE_MONTHS=0 (current month only), 10 files OR 100K rows OR 1GB, 24h stats
--- - 2 CSDB DEBT: MINIMUM_AGE_MONTHS=6, 5 files OR 50K rows OR 512MB, 48h stats
+-- - 2 CSDB DEBT: MINIMUM_AGE_MONTHS=0 (current month only), 5 files OR 50K rows OR 512MB, 48h stats
-- - 4 CSDB ratings: MINIMUM_AGE_MONTHS=0 (current month only), 10 files OR 20K rows OR 256MB, 72h stats
--
-- Dependencies:
@@ -33,7 +33,7 @@ PROMPT - Triggers: 10 files OR 100,000 rows OR 1 GB
PROMPT - Stats Expiration: 24 hours
PROMPT
PROMPT CSDB DEBT Tables (2):
-PROMPT - Strategy: MINIMUM_AGE_MONTHS = 6
+PROMPT - Strategy: MINIMUM_AGE_MONTHS = 0 (current month only)
PROMPT - Triggers: 5 files OR 50,000 rows OR 512 MB
PROMPT - Stats Expiration: 48 hours
PROMPT
@@ -57,12 +57,12 @@ UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET ARCHIVAL_STRATEGY = 'MINIMUM_AGE_MONTHS',
MINIMUM_AGE_MONTHS = 0, -- 0 = current month only
ODS_SCHEMA_NAME = 'ODS',
-FILES_COUNT_OVER_ARCHIVE_THRESHOLD = 10,
-ROWS_COUNT_OVER_ARCHIVE_THRESHOLD = 100000,
-BYTES_SUM_OVER_ARCHIVE_THRESHOLD = 1073741824, -- 1 GB
+ARCHIVE_THRESHOLD_FILES_COUNT = 10,
+ARCHIVE_THRESHOLD_ROWS_COUNT = 100000,
+ARCHIVE_THRESHOLD_BYTES_SUM = 1073741824, -- 1 GB
HOURS_TO_EXPIRE_STATISTICS = 24,
-ARCHIVE_ENABLED = 'Y', -- Enable archival for all LM tables
-KEEP_IN_TRASH = 'N' -- Delete files immediately after archival (no TRASH retention)
+IS_ARCHIVE_ENABLED = 'Y', -- Enable archival for all LM tables
+IS_KEEP_IN_TRASH = 'N' -- Delete files immediately after archival (no TRASH retention)
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND A_SOURCE_KEY = 'LM'
AND TABLE_ID IN (
@@ -92,23 +92,23 @@ PROMPT LM tables configuration completed
PROMPT
PROMPT =====================================================================
-PROMPT SECTION 2: CSDB DEBT Tables (MINIMUM_AGE_MONTHS = 6)
+PROMPT SECTION 2: CSDB DEBT Tables (MINIMUM_AGE_MONTHS = 0)
PROMPT =====================================================================
PROMPT Thresholds: 5 files OR 50K rows OR 512MB
PROMPT Stats expire: 48 hours
PROMPT =====================================================================
--- Update CSDB DEBT tables (6-month retention)
+-- Update CSDB DEBT tables (current month only)
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET ARCHIVAL_STRATEGY = 'MINIMUM_AGE_MONTHS',
-MINIMUM_AGE_MONTHS = 6,
+MINIMUM_AGE_MONTHS = 0,
ODS_SCHEMA_NAME = 'ODS',
-FILES_COUNT_OVER_ARCHIVE_THRESHOLD = 5,
-ROWS_COUNT_OVER_ARCHIVE_THRESHOLD = 50000,
-BYTES_SUM_OVER_ARCHIVE_THRESHOLD = 536870912, -- 512 MB
+ARCHIVE_THRESHOLD_FILES_COUNT = 5,
+ARCHIVE_THRESHOLD_ROWS_COUNT = 50000,
+ARCHIVE_THRESHOLD_BYTES_SUM = 536870912, -- 512 MB
HOURS_TO_EXPIRE_STATISTICS = 48,
-ARCHIVE_ENABLED = 'Y', -- Enable archival for CSDB DEBT tables
-KEEP_IN_TRASH = 'N' -- Delete files immediately after archival (no TRASH retention)
+IS_ARCHIVE_ENABLED = 'Y', -- Enable archival for CSDB DEBT tables
+IS_KEEP_IN_TRASH = 'N' -- Delete files immediately after archival (no TRASH retention)
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND A_SOURCE_KEY = 'CSDB'
AND TABLE_ID IN ('CSDB_DEBT', 'CSDB_DEBT_DAILY');
@@ -129,12 +129,12 @@ UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET ARCHIVAL_STRATEGY = 'MINIMUM_AGE_MONTHS',
MINIMUM_AGE_MONTHS = 0, -- 0 = current month only
ODS_SCHEMA_NAME = 'ODS',
-FILES_COUNT_OVER_ARCHIVE_THRESHOLD = 10,
-ROWS_COUNT_OVER_ARCHIVE_THRESHOLD = 20000,
-BYTES_SUM_OVER_ARCHIVE_THRESHOLD = 268435456, -- 256 MB
+ARCHIVE_THRESHOLD_FILES_COUNT = 10,
+ARCHIVE_THRESHOLD_ROWS_COUNT = 20000,
+ARCHIVE_THRESHOLD_BYTES_SUM = 268435456, -- 256 MB
HOURS_TO_EXPIRE_STATISTICS = 72,
-ARCHIVE_ENABLED = 'Y', -- Enable archival for CSDB rating/description tables
-KEEP_IN_TRASH = 'N' -- Delete files immediately after archival (no TRASH retention)
+IS_ARCHIVE_ENABLED = 'Y', -- Enable archival for CSDB rating/description tables
+IS_KEEP_IN_TRASH = 'N' -- Delete files immediately after archival (no TRASH retention)
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND A_SOURCE_KEY = 'CSDB'
AND TABLE_ID IN (
@@ -170,21 +170,21 @@ SELECT
TABLE_ID,
ARCHIVAL_STRATEGY,
MINIMUM_AGE_MONTHS,
-FILES_COUNT_OVER_ARCHIVE_THRESHOLD AS FILE_THR,
-ROWS_COUNT_OVER_ARCHIVE_THRESHOLD AS ROW_THR,
-BYTES_SUM_OVER_ARCHIVE_THRESHOLD AS BYTE_THR,
+ARCHIVE_THRESHOLD_FILES_COUNT AS FILE_THR,
+ARCHIVE_THRESHOLD_ROWS_COUNT AS ROW_THR,
+ARCHIVE_THRESHOLD_BYTES_SUM AS BYTE_THR,
HOURS_TO_EXPIRE_STATISTICS AS STATS_HRS,
-ARCHIVE_ENABLED,
-KEEP_IN_TRASH,
+IS_ARCHIVE_ENABLED,
+IS_KEEP_IN_TRASH,
CASE
WHEN ARCHIVAL_STRATEGY = 'MINIMUM_AGE_MONTHS'
AND MINIMUM_AGE_MONTHS = 0
-AND FILES_COUNT_OVER_ARCHIVE_THRESHOLD = 10
-AND ROWS_COUNT_OVER_ARCHIVE_THRESHOLD = 100000
-AND BYTES_SUM_OVER_ARCHIVE_THRESHOLD = 1073741824
+AND ARCHIVE_THRESHOLD_FILES_COUNT = 10
+AND ARCHIVE_THRESHOLD_ROWS_COUNT = 100000
+AND ARCHIVE_THRESHOLD_BYTES_SUM = 1073741824
AND HOURS_TO_EXPIRE_STATISTICS = 24
-AND ARCHIVE_ENABLED = 'Y'
-AND KEEP_IN_TRASH = 'N'
+AND IS_ARCHIVE_ENABLED = 'Y'
+AND IS_KEEP_IN_TRASH = 'N'
THEN 'OK'
ELSE 'ERROR'
END AS STATUS
@@ -195,28 +195,28 @@ WHERE A_SOURCE_KEY = 'LM'
ORDER BY TABLE_ID;
PROMPT
PROMPT CSDB DEBT Tables (MINIMUM_AGE_MONTHS = 6):
PROMPT CSDB DEBT Tables (MINIMUM_AGE_MONTHS = 0):
PROMPT
SELECT
TABLE_ID,
ARCHIVAL_STRATEGY,
MINIMUM_AGE_MONTHS,
FILES_COUNT_OVER_ARCHIVE_THRESHOLD AS FILE_THR,
ROWS_COUNT_OVER_ARCHIVE_THRESHOLD AS ROW_THR,
BYTES_SUM_OVER_ARCHIVE_THRESHOLD AS BYTE_THR,
ARCHIVE_THRESHOLD_FILES_COUNT AS FILE_THR,
ARCHIVE_THRESHOLD_ROWS_COUNT AS ROW_THR,
ARCHIVE_THRESHOLD_BYTES_SUM AS BYTE_THR,
HOURS_TO_EXPIRE_STATISTICS AS STATS_HRS,
ARCHIVE_ENABLED,
KEEP_IN_TRASH,
IS_ARCHIVE_ENABLED,
IS_KEEP_IN_TRASH,
CASE
WHEN ARCHIVAL_STRATEGY = 'MINIMUM_AGE_MONTHS'
AND MINIMUM_AGE_MONTHS = 6
AND FILES_COUNT_OVER_ARCHIVE_THRESHOLD = 5
AND ROWS_COUNT_OVER_ARCHIVE_THRESHOLD = 50000
AND BYTES_SUM_OVER_ARCHIVE_THRESHOLD = 536870912
AND MINIMUM_AGE_MONTHS = 0
AND ARCHIVE_THRESHOLD_FILES_COUNT = 5
AND ARCHIVE_THRESHOLD_ROWS_COUNT = 50000
AND ARCHIVE_THRESHOLD_BYTES_SUM = 536870912
AND HOURS_TO_EXPIRE_STATISTICS = 48
AND ARCHIVE_ENABLED = 'Y'
AND KEEP_IN_TRASH = 'N'
AND IS_ARCHIVE_ENABLED = 'Y'
AND IS_KEEP_IN_TRASH = 'N'
THEN 'OK'
ELSE 'ERROR'
END AS STATUS
@@ -234,21 +234,21 @@ SELECT
TABLE_ID,
ARCHIVAL_STRATEGY,
MINIMUM_AGE_MONTHS,
FILES_COUNT_OVER_ARCHIVE_THRESHOLD AS FILE_THR,
ROWS_COUNT_OVER_ARCHIVE_THRESHOLD AS ROW_THR,
BYTES_SUM_OVER_ARCHIVE_THRESHOLD AS BYTE_THR,
ARCHIVE_THRESHOLD_FILES_COUNT AS FILE_THR,
ARCHIVE_THRESHOLD_ROWS_COUNT AS ROW_THR,
ARCHIVE_THRESHOLD_BYTES_SUM AS BYTE_THR,
HOURS_TO_EXPIRE_STATISTICS AS STATS_HRS,
ARCHIVE_ENABLED,
KEEP_IN_TRASH,
IS_ARCHIVE_ENABLED,
IS_KEEP_IN_TRASH,
CASE
WHEN ARCHIVAL_STRATEGY = 'MINIMUM_AGE_MONTHS'
AND MINIMUM_AGE_MONTHS = 0
AND FILES_COUNT_OVER_ARCHIVE_THRESHOLD = 10
AND ROWS_COUNT_OVER_ARCHIVE_THRESHOLD = 20000
AND BYTES_SUM_OVER_ARCHIVE_THRESHOLD = 268435456
AND ARCHIVE_THRESHOLD_FILES_COUNT = 10
AND ARCHIVE_THRESHOLD_ROWS_COUNT = 20000
AND ARCHIVE_THRESHOLD_BYTES_SUM = 268435456
AND HOURS_TO_EXPIRE_STATISTICS = 72
AND ARCHIVE_ENABLED = 'Y'
AND KEEP_IN_TRASH = 'N'
AND IS_ARCHIVE_ENABLED = 'Y'
AND IS_KEEP_IN_TRASH = 'N'
THEN 'OK'
ELSE 'ERROR'
END AS STATUS
@@ -267,12 +267,12 @@ SELECT
COUNT(*) AS TOTAL_CONFIGURED,
SUM(CASE WHEN MINIMUM_AGE_MONTHS = 0 THEN 1 ELSE 0 END) AS CURRENT_MONTH_ONLY,
SUM(CASE WHEN MINIMUM_AGE_MONTHS > 0 THEN 1 ELSE 0 END) AS MULTI_MONTH_RETENTION,
SUM(CASE WHEN FILES_COUNT_OVER_ARCHIVE_THRESHOLD IS NOT NULL THEN 1 ELSE 0 END) AS WITH_FILE_THRESHOLD,
SUM(CASE WHEN ROWS_COUNT_OVER_ARCHIVE_THRESHOLD IS NOT NULL THEN 1 ELSE 0 END) AS WITH_ROWS_THRESHOLD,
SUM(CASE WHEN BYTES_SUM_OVER_ARCHIVE_THRESHOLD IS NOT NULL THEN 1 ELSE 0 END) AS WITH_BYTES_THRESHOLD,
SUM(CASE WHEN ARCHIVE_THRESHOLD_FILES_COUNT IS NOT NULL THEN 1 ELSE 0 END) AS WITH_FILE_THRESHOLD,
SUM(CASE WHEN ARCHIVE_THRESHOLD_ROWS_COUNT IS NOT NULL THEN 1 ELSE 0 END) AS WITH_ROWS_THRESHOLD,
SUM(CASE WHEN ARCHIVE_THRESHOLD_BYTES_SUM IS NOT NULL THEN 1 ELSE 0 END) AS WITH_BYTES_THRESHOLD,
SUM(CASE WHEN HOURS_TO_EXPIRE_STATISTICS IS NOT NULL THEN 1 ELSE 0 END) AS WITH_STATS_EXPIRY,
SUM(CASE WHEN ARCHIVE_ENABLED = 'Y' THEN 1 ELSE 0 END) AS ARCHIVAL_ENABLED,
SUM(CASE WHEN KEEP_IN_TRASH = 'N' THEN 1 ELSE 0 END) AS IMMEDIATE_DELETE
SUM(CASE WHEN IS_ARCHIVE_ENABLED = 'Y' THEN 1 ELSE 0 END) AS ARCHIVAL_ENABLED,
SUM(CASE WHEN IS_KEEP_IN_TRASH = 'N' THEN 1 ELSE 0 END) AS IMMEDIATE_DELETE
FROM CT_MRDS.A_SOURCE_FILE_CONFIG
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND ((A_SOURCE_KEY = 'LM' AND TABLE_ID LIKE 'LM_%')
@@ -306,9 +306,9 @@ SELECT
COUNT(*) AS TABLE_COUNT,
MAX(ARCHIVAL_STRATEGY) AS STRATEGY,
MAX(MINIMUM_AGE_MONTHS) AS MIN_AGE,
MAX(FILES_COUNT_OVER_ARCHIVE_THRESHOLD) AS FILES_THRESHOLD,
MAX(ROWS_COUNT_OVER_ARCHIVE_THRESHOLD) AS ROWS_THRESHOLD,
ROUND(MAX(BYTES_SUM_OVER_ARCHIVE_THRESHOLD)/1048576, 0) || ' MB' AS BYTES_THRESHOLD,
MAX(ARCHIVE_THRESHOLD_FILES_COUNT) AS FILES_THRESHOLD,
MAX(ARCHIVE_THRESHOLD_ROWS_COUNT) AS ROWS_THRESHOLD,
ROUND(MAX(ARCHIVE_THRESHOLD_BYTES_SUM)/1048576, 0) || ' MB' AS BYTES_THRESHOLD,
MAX(HOURS_TO_EXPIRE_STATISTICS) AS STATS_HOURS
FROM CT_MRDS.A_SOURCE_FILE_CONFIG
WHERE SOURCE_FILE_TYPE = 'INPUT'
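Before running the configuration updates above, the MARS-828 renames can be sanity-checked against the dictionary; a minimal sketch, with the column list taken from the diff above (adjust OWNER if installing into another schema):

```sql
-- Sketch: confirm the renamed MARS-828 columns exist before configuring.
SELECT column_name
FROM   all_tab_columns
WHERE  owner = 'CT_MRDS'
AND    table_name = 'A_SOURCE_FILE_CONFIG'
AND    column_name IN ('ARCHIVE_THRESHOLD_FILES_COUNT',
                       'ARCHIVE_THRESHOLD_ROWS_COUNT',
                       'ARCHIVE_THRESHOLD_BYTES_SUM',
                       'IS_ARCHIVE_ENABLED',
                       'IS_KEEP_IN_TRASH')
ORDER BY column_name;
-- Expect 5 rows; fewer suggests the rename/add scripts have not been applied.
```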

@@ -0,0 +1,29 @@
--=============================================================================================================================
-- MARS-828: Install CT_MRDS.FILE_MANAGER Package Specification v3.3.2
--=============================================================================================================================
-- Purpose: Deploy FILE_MANAGER Package Specification with MARS-828 column compatibility
-- Author: Grzegorz Michalski
-- Date: 2026-02-20
-- Related: MARS-828 Threshold Column Rename Compatibility
--=============================================================================================================================
SET SERVEROUTPUT ON
PROMPT ========================================================================
PROMPT Installing CT_MRDS.FILE_MANAGER Package Specification v3.3.2
PROMPT ========================================================================
@@new_version/FILE_MANAGER.pkg
-- Verify package compilation (check specific schema when installing as ADMIN)
SELECT OBJECT_NAME, OBJECT_TYPE, STATUS
FROM ALL_OBJECTS
WHERE OWNER = 'CT_MRDS'
AND OBJECT_NAME = 'FILE_MANAGER'
AND OBJECT_TYPE = 'PACKAGE';
PROMPT SUCCESS: FILE_MANAGER Package Specification v3.3.2 installed
--=============================================================================================================================
-- End of Script
--=============================================================================================================================

@@ -0,0 +1,38 @@
--=============================================================================================================================
-- MARS-828: Install CT_MRDS.FILE_MANAGER Package Body v3.3.2
--=============================================================================================================================
-- Purpose: Deploy FILE_MANAGER Package Body with MARS-828 threshold column compatibility
-- Author: Grzegorz Michalski
-- Date: 2026-02-20
-- Related: MARS-828 Threshold Column Rename Compatibility
--=============================================================================================================================
SET SERVEROUTPUT ON
PROMPT ========================================================================
PROMPT Installing CT_MRDS.FILE_MANAGER Package Body v3.3.2
PROMPT ========================================================================
@@new_version/FILE_MANAGER.pkb
-- Verify package compilation (check specific schema when installing as ADMIN)
SELECT OBJECT_NAME, OBJECT_TYPE, STATUS
FROM ALL_OBJECTS
WHERE OWNER = 'CT_MRDS'
AND OBJECT_NAME = 'FILE_MANAGER'
AND OBJECT_TYPE IN ('PACKAGE', 'PACKAGE BODY')
ORDER BY OBJECT_TYPE;
-- Check for any compilation errors
SELECT 'COMPILATION ERRORS FOUND' AS WARNING
FROM ALL_ERRORS
WHERE OWNER = 'CT_MRDS'
AND NAME = 'FILE_MANAGER'
AND TYPE = 'PACKAGE BODY'
AND ROWNUM = 1;
PROMPT SUCCESS: FILE_MANAGER Package Body v3.3.2 installed
--=============================================================================================================================
-- End of Script
--=============================================================================================================================
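If the `COMPILATION ERRORS FOUND` warning row appears, the individual error lines can be pulled from the same dictionary view; a sketch:

```sql
-- Sketch: list individual compilation errors for the package body.
SELECT line, position, text
FROM   all_errors
WHERE  owner = 'CT_MRDS'
AND    name  = 'FILE_MANAGER'
AND    type  = 'PACKAGE BODY'
ORDER BY sequence;
```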

@@ -1,7 +1,7 @@
-- MARS-828: Rollback archival strategy columns
-- Author: Grzegorz Michalski
-- Date: 2026-01-27
-- Description: Remove ARCHIVAL_STRATEGY, MINIMUM_AGE_MONTHS, ARCHIVE_ENABLED, and KEEP_IN_TRASH columns
-- Description: Remove ARCHIVAL_STRATEGY, MINIMUM_AGE_MONTHS, IS_ARCHIVE_ENABLED, and IS_KEEP_IN_TRASH columns
PROMPT ========================================
PROMPT MARS-828: Removing archival strategy and config columns
@@ -12,17 +12,20 @@ ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG
DROP CONSTRAINT CHK_ARCHIVAL_STRATEGY;
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG
DROP CONSTRAINT CHK_ARCHIVE_ENABLED;
DROP CONSTRAINT CHK_IS_ARCHIVE_ENABLED;
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG
DROP CONSTRAINT CHK_KEEP_IN_TRASH;
DROP CONSTRAINT CHK_IS_KEEP_IN_TRASH;
-- Drop columns
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG DROP (
ARCHIVAL_STRATEGY,
MINIMUM_AGE_MONTHS,
ARCHIVE_ENABLED,
KEEP_IN_TRASH
MINIMUM_AGE_MONTHS
);
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG DROP (
IS_ARCHIVE_ENABLED,
IS_KEEP_IN_TRASH
);
-- Verify columns dropped
@@ -31,7 +34,7 @@ SELECT
FROM all_tab_columns
WHERE owner = 'CT_MRDS'
AND table_name = 'A_SOURCE_FILE_CONFIG'
AND column_name IN ('ARCHIVAL_STRATEGY', 'MINIMUM_AGE_MONTHS', 'ARCHIVE_ENABLED', 'KEEP_IN_TRASH');
AND column_name IN ('ARCHIVAL_STRATEGY', 'MINIMUM_AGE_MONTHS', 'IS_ARCHIVE_ENABLED', 'IS_KEEP_IN_TRASH');
PROMPT ========================================
PROMPT Archival strategy and config columns removed successfully

@@ -0,0 +1,47 @@
-- MARS-828: Rollback threshold column renames
-- Author: Grzegorz Michalski
-- Date: 2026-01-28
-- Description: Reverts threshold columns back to original naming
PROMPT ========================================
PROMPT MARS-828: Rolling back threshold column renames
PROMPT ========================================
-- Revert threshold columns to original names
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG
RENAME COLUMN ARCHIVE_THRESHOLD_DAYS TO DAYS_FOR_ARCHIVE_THRESHOLD;
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG
RENAME COLUMN ARCHIVE_THRESHOLD_FILES_COUNT TO FILES_COUNT_OVER_ARCHIVE_THRESHOLD;
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG
RENAME COLUMN ARCHIVE_THRESHOLD_BYTES_SUM TO BYTES_SUM_OVER_ARCHIVE_THRESHOLD;
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG
RENAME COLUMN ARCHIVE_THRESHOLD_ROWS_COUNT TO ROWS_COUNT_OVER_ARCHIVE_THRESHOLD;
-- Verify rollback
PROMPT ========================================
PROMPT Verifying threshold column rollback...
PROMPT ========================================
SELECT
column_name,
data_type,
data_length
FROM all_tab_columns
WHERE owner = 'CT_MRDS'
AND table_name = 'A_SOURCE_FILE_CONFIG'
AND (column_name LIKE '%ARCHIVE_THRESHOLD%' OR column_name LIKE 'DAYS_FOR%')
ORDER BY column_id;
PROMPT ========================================
PROMPT Expected original columns:
PROMPT DAYS_FOR_ARCHIVE_THRESHOLD
PROMPT FILES_COUNT_OVER_ARCHIVE_THRESHOLD
PROMPT BYTES_SUM_OVER_ARCHIVE_THRESHOLD
PROMPT ROWS_COUNT_OVER_ARCHIVE_THRESHOLD
PROMPT ========================================
PROMPT Threshold column renames rolled back successfully
PROMPT ========================================
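For reference, the forward renames this rollback reverts would look like the following; this is a sketch of what `01a_MARS_828_rename_threshold_columns.sql` presumably performs (that script is not shown in this diff):

```sql
-- Sketch: forward renames (inverse of the rollback above); assumed content
-- of 01a_MARS_828_rename_threshold_columns.sql.
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG
  RENAME COLUMN DAYS_FOR_ARCHIVE_THRESHOLD TO ARCHIVE_THRESHOLD_DAYS;
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG
  RENAME COLUMN FILES_COUNT_OVER_ARCHIVE_THRESHOLD TO ARCHIVE_THRESHOLD_FILES_COUNT;
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG
  RENAME COLUMN BYTES_SUM_OVER_ARCHIVE_THRESHOLD TO ARCHIVE_THRESHOLD_BYTES_SUM;
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG
  RENAME COLUMN ROWS_COUNT_OVER_ARCHIVE_THRESHOLD TO ARCHIVE_THRESHOLD_ROWS_COUNT;
```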

@@ -0,0 +1,84 @@
-- =====================================================================
-- Script: 94b_MARS_828_rollback_column_comments.sql
-- MARS Issue: MARS-828
-- Author: Grzegorz Michalski
-- Date: 2026-02-20
-- Purpose: Remove column comments added by 01b_MARS_828_add_column_comments.sql
-- Description: Optional rollback - removes documentation but does not affect functionality
-- =====================================================================
PROMPT ========================================
PROMPT MARS-828: Removing column comments (optional)
PROMPT ========================================
-- =====================================================================
-- Remove A_SOURCE_FILE_CONFIG Column Comments
-- =====================================================================
PROMPT Removing column comments from A_SOURCE_FILE_CONFIG...
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_KEY IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_DESC IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_NAME_PATTERN IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_DAYS IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_FILES_COUNT IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_BYTES_SUM IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_ROWS_COUNT IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ODS_SCHEMA_NAME IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.HOURS_TO_EXPIRE_STATISTICS IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_KEEP_IN_TRASH IS '';
-- =====================================================================
-- Remove A_SOURCE_FILE_RECEIVED Column Comments
-- =====================================================================
PROMPT Removing column comments from A_SOURCE_FILE_RECEIVED...
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_CONFIG_KEY IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.SOURCE_FILE_NAME IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.CHECKSUM IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.CREATED IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.BYTES IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.RECEPTION_DATE IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PROCESSING_STATUS IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.EXTERNAL_TABLE_NAME IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PARTITION_YEAR IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PARTITION_MONTH IS '';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.ARCH_PATH IS '';
-- =====================================================================
-- Verification
-- =====================================================================
PROMPT
PROMPT Verifying column comments removed...
PROMPT
SELECT
table_name,
COUNT(*) as total_columns,
COUNT(CASE WHEN comments IS NOT NULL AND LENGTH(comments) > 0 THEN 1 END) as documented_columns
FROM all_col_comments
WHERE owner = 'CT_MRDS'
AND table_name IN ('A_SOURCE_FILE_CONFIG', 'A_SOURCE_FILE_RECEIVED')
GROUP BY table_name
ORDER BY table_name;
PROMPT
PROMPT ========================================
PROMPT Column comments removed successfully
PROMPT ========================================
PROMPT NOTE: This is an optional rollback step
PROMPT Database functionality is not affected
PROMPT ========================================
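Note that in Oracle, `COMMENT ON ... IS ''` removes the comment outright (the empty string literal is treated as NULL), which is why the verification above counts documented columns rather than comparing text. A quick sketch of the same check for a single column:

```sql
-- Sketch: confirm one specific comment is gone after the rollback.
SELECT NVL(comments, '<removed>') AS comment_state
FROM   all_col_comments
WHERE  owner = 'CT_MRDS'
AND    table_name = 'A_SOURCE_FILE_CONFIG'
AND    column_name = 'IS_KEEP_IN_TRASH';
```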

@@ -10,9 +10,9 @@
-- archival parameters back to NULL (unconfigured state):
-- - ARCHIVAL_STRATEGY
-- - MINIMUM_AGE_MONTHS
-- - FILES_COUNT_OVER_ARCHIVE_THRESHOLD
-- - ROWS_COUNT_OVER_ARCHIVE_THRESHOLD
-- - BYTES_SUM_OVER_ARCHIVE_THRESHOLD
-- - ARCHIVE_THRESHOLD_FILES_COUNT
-- - ARCHIVE_THRESHOLD_ROWS_COUNT
-- - ARCHIVE_THRESHOLD_BYTES_SUM
-- - HOURS_TO_EXPIRE_STATISTICS
--
-- This script reverts changes made by:
@@ -47,9 +47,9 @@ UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET ARCHIVAL_STRATEGY = NULL,
MINIMUM_AGE_MONTHS = NULL,
ODS_SCHEMA_NAME = NULL,
FILES_COUNT_OVER_ARCHIVE_THRESHOLD = NULL,
ROWS_COUNT_OVER_ARCHIVE_THRESHOLD = NULL,
BYTES_SUM_OVER_ARCHIVE_THRESHOLD = NULL,
ARCHIVE_THRESHOLD_FILES_COUNT = NULL,
ARCHIVE_THRESHOLD_ROWS_COUNT = NULL,
ARCHIVE_THRESHOLD_BYTES_SUM = NULL,
HOURS_TO_EXPIRE_STATISTICS = NULL
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND A_SOURCE_KEY = 'LM'
@@ -88,9 +88,9 @@ UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET ARCHIVAL_STRATEGY = NULL,
MINIMUM_AGE_MONTHS = NULL,
ODS_SCHEMA_NAME = NULL,
FILES_COUNT_OVER_ARCHIVE_THRESHOLD = NULL,
ROWS_COUNT_OVER_ARCHIVE_THRESHOLD = NULL,
BYTES_SUM_OVER_ARCHIVE_THRESHOLD = NULL,
ARCHIVE_THRESHOLD_FILES_COUNT = NULL,
ARCHIVE_THRESHOLD_ROWS_COUNT = NULL,
ARCHIVE_THRESHOLD_BYTES_SUM = NULL,
HOURS_TO_EXPIRE_STATISTICS = NULL
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND A_SOURCE_KEY = 'CSDB'
@@ -109,9 +109,9 @@ UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET ARCHIVAL_STRATEGY = NULL,
MINIMUM_AGE_MONTHS = NULL,
ODS_SCHEMA_NAME = NULL,
FILES_COUNT_OVER_ARCHIVE_THRESHOLD = NULL,
ROWS_COUNT_OVER_ARCHIVE_THRESHOLD = NULL,
BYTES_SUM_OVER_ARCHIVE_THRESHOLD = NULL,
ARCHIVE_THRESHOLD_FILES_COUNT = NULL,
ARCHIVE_THRESHOLD_ROWS_COUNT = NULL,
ARCHIVE_THRESHOLD_BYTES_SUM = NULL,
HOURS_TO_EXPIRE_STATISTICS = NULL
WHERE SOURCE_FILE_TYPE = 'INPUT'
AND A_SOURCE_KEY = 'CSDB'
@@ -148,16 +148,16 @@ SELECT
TABLE_ID,
ARCHIVAL_STRATEGY,
MINIMUM_AGE_MONTHS,
FILES_COUNT_OVER_ARCHIVE_THRESHOLD AS FILE_THR,
ROWS_COUNT_OVER_ARCHIVE_THRESHOLD AS ROW_THR,
BYTES_SUM_OVER_ARCHIVE_THRESHOLD AS BYTE_THR,
ARCHIVE_THRESHOLD_FILES_COUNT AS FILE_THR,
ARCHIVE_THRESHOLD_ROWS_COUNT AS ROW_THR,
ARCHIVE_THRESHOLD_BYTES_SUM AS BYTE_THR,
HOURS_TO_EXPIRE_STATISTICS AS STATS_HRS,
CASE
WHEN ARCHIVAL_STRATEGY IS NULL
AND MINIMUM_AGE_MONTHS IS NULL
AND FILES_COUNT_OVER_ARCHIVE_THRESHOLD IS NULL
AND ROWS_COUNT_OVER_ARCHIVE_THRESHOLD IS NULL
AND BYTES_SUM_OVER_ARCHIVE_THRESHOLD IS NULL
AND ARCHIVE_THRESHOLD_FILES_COUNT IS NULL
AND ARCHIVE_THRESHOLD_ROWS_COUNT IS NULL
AND ARCHIVE_THRESHOLD_BYTES_SUM IS NULL
AND HOURS_TO_EXPIRE_STATISTICS IS NULL
THEN 'OK'
ELSE 'ERROR - Still configured'
@@ -176,16 +176,16 @@ SELECT
TABLE_ID,
ARCHIVAL_STRATEGY,
MINIMUM_AGE_MONTHS,
FILES_COUNT_OVER_ARCHIVE_THRESHOLD AS FILE_THR,
ROWS_COUNT_OVER_ARCHIVE_THRESHOLD AS ROW_THR,
BYTES_SUM_OVER_ARCHIVE_THRESHOLD AS BYTE_THR,
ARCHIVE_THRESHOLD_FILES_COUNT AS FILE_THR,
ARCHIVE_THRESHOLD_ROWS_COUNT AS ROW_THR,
ARCHIVE_THRESHOLD_BYTES_SUM AS BYTE_THR,
HOURS_TO_EXPIRE_STATISTICS AS STATS_HRS,
CASE
WHEN ARCHIVAL_STRATEGY IS NULL
AND MINIMUM_AGE_MONTHS IS NULL
AND FILES_COUNT_OVER_ARCHIVE_THRESHOLD IS NULL
AND ROWS_COUNT_OVER_ARCHIVE_THRESHOLD IS NULL
AND BYTES_SUM_OVER_ARCHIVE_THRESHOLD IS NULL
AND ARCHIVE_THRESHOLD_FILES_COUNT IS NULL
AND ARCHIVE_THRESHOLD_ROWS_COUNT IS NULL
AND ARCHIVE_THRESHOLD_BYTES_SUM IS NULL
AND HOURS_TO_EXPIRE_STATISTICS IS NULL
THEN 'OK'
ELSE 'ERROR - Still configured'
@@ -204,16 +204,16 @@ SELECT
TABLE_ID,
ARCHIVAL_STRATEGY,
MINIMUM_AGE_MONTHS,
FILES_COUNT_OVER_ARCHIVE_THRESHOLD AS FILE_THR,
ROWS_COUNT_OVER_ARCHIVE_THRESHOLD AS ROW_THR,
BYTES_SUM_OVER_ARCHIVE_THRESHOLD AS BYTE_THR,
ARCHIVE_THRESHOLD_FILES_COUNT AS FILE_THR,
ARCHIVE_THRESHOLD_ROWS_COUNT AS ROW_THR,
ARCHIVE_THRESHOLD_BYTES_SUM AS BYTE_THR,
HOURS_TO_EXPIRE_STATISTICS AS STATS_HRS,
CASE
WHEN ARCHIVAL_STRATEGY IS NULL
AND MINIMUM_AGE_MONTHS IS NULL
AND FILES_COUNT_OVER_ARCHIVE_THRESHOLD IS NULL
AND ROWS_COUNT_OVER_ARCHIVE_THRESHOLD IS NULL
AND BYTES_SUM_OVER_ARCHIVE_THRESHOLD IS NULL
AND ARCHIVE_THRESHOLD_FILES_COUNT IS NULL
AND ARCHIVE_THRESHOLD_ROWS_COUNT IS NULL
AND ARCHIVE_THRESHOLD_BYTES_SUM IS NULL
AND HOURS_TO_EXPIRE_STATISTICS IS NULL
THEN 'OK'
ELSE 'ERROR - Still configured'
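The three per-section checks above can be collapsed into one aggregate to confirm the reset in a single pass; a sketch, with the source keys taken from the UPDATE statements earlier in the script:

```sql
-- Sketch: count rows where any archival parameter is still set after rollback.
SELECT COUNT(*) AS still_configured
FROM   CT_MRDS.A_SOURCE_FILE_CONFIG
WHERE  SOURCE_FILE_TYPE = 'INPUT'
AND    A_SOURCE_KEY IN ('LM', 'CSDB')
AND   (ARCHIVAL_STRATEGY IS NOT NULL
    OR MINIMUM_AGE_MONTHS IS NOT NULL
    OR ARCHIVE_THRESHOLD_FILES_COUNT IS NOT NULL
    OR ARCHIVE_THRESHOLD_ROWS_COUNT IS NOT NULL
    OR ARCHIVE_THRESHOLD_BYTES_SUM IS NOT NULL
    OR HOURS_TO_EXPIRE_STATISTICS IS NOT NULL);
-- Expect 0 after a successful rollback.
```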

@@ -0,0 +1,10 @@
-- ===================================================================
-- MARS-828: Rollback FILE_MANAGER Package Specification to v3.3.1
-- ===================================================================
-- Purpose: Restore previous package specification version (pre-threshold column rename compatibility)
-- Author: Grzegorz Michalski
-- Date: 2026-02-20
-- WARNING: This removes MARS-828 threshold column compatibility from FILE_MANAGER
-- ===================================================================
@@rollback_version/FILE_MANAGER.pkg

@@ -0,0 +1,10 @@
-- ===================================================================
-- MARS-828: Rollback FILE_MANAGER Package Body to v3.3.1
-- ===================================================================
-- Purpose: Restore previous package body version (pre-threshold column rename compatibility)
-- Author: Grzegorz Michalski
-- Date: 2026-02-20
-- WARNING: This removes MARS-828 threshold column compatibility from FILE_MANAGER
-- ===================================================================
@@rollback_version/FILE_MANAGER.pkb

@@ -35,10 +35,10 @@ PROMPT
PROMPT ============================================================================
PROMPT MARS-828 Installation Starting
PROMPT ============================================================================
PROMPT Package: CT_MRDS.FILE_ARCHIVER
PROMPT Change: Enhanced archival strategies (MINIMUM_AGE_MONTHS, HYBRID) + TRASH retention + Selective archiving
PROMPT Package: CT_MRDS.FILE_ARCHIVER v3.3.0 + CT_MRDS.FILE_MANAGER v3.3.2
PROMPT Change: Enhanced archival strategies (MINIMUM_AGE_MONTHS, HYBRID) + TRASH retention + Selective archiving + FILE_MANAGER compatibility
PROMPT Purpose: Flexible archival policies per data source with file retention and config-based control
PROMPT Steps: 10 (DDL, Trigger, Statuses, Grants, Package v3.3.0, Verify, Track, Configure)
PROMPT Steps: 14 (DDL, Rename, Comments, Trigger, Statuses, Grants, Packages, Verify, Track, Configure)
PROMPT Timestamp:
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS install_start FROM DUAL;
PROMPT ============================================================================
@@ -56,52 +56,72 @@ WHENEVER SQLERROR CONTINUE
-- Installation steps
PROMPT
PROMPT Step 1/9: Adding archival strategy and config columns to A_SOURCE_FILE_CONFIG
PROMPT =============================================================================
PROMPT Step 1/14: Adding archival strategy and config columns to A_SOURCE_FILE_CONFIG
PROMPT ==============================================================================
@@01_MARS_828_install_add_archival_strategy_columns.sql
PROMPT
PROMPT Step 2/9: Creating validation trigger
PROMPT Step 2/14: Renaming threshold columns for consistent naming
PROMPT ==========================================================
@@01a_MARS_828_rename_threshold_columns.sql
PROMPT
PROMPT Step 3/14: Adding comprehensive column comments
PROMPT ===============================================
@@01b_MARS_828_add_column_comments.sql
PROMPT
PROMPT Step 4/14: Creating validation trigger
PROMPT ======================================
@@02_MARS_828_install_archival_strategy_trigger.sql
PROMPT
PROMPT Step 3/10: Adding TRASH retention statuses to A_SOURCE_FILE_RECEIVED
PROMPT =====================================================================
PROMPT Step 5/14: Adding TRASH retention statuses to A_SOURCE_FILE_RECEIVED
PROMPT ===================================================================
@@07_MARS_828_install_add_trash_retention_statuses.sql
PROMPT
PROMPT Step 4/10: Granting privileges on T_FILENAME to MRDS_LOADER
PROMPT ============================================================
PROMPT Step 6/14: Granting privileges on T_FILENAME to MRDS_LOADER
PROMPT ==========================================================
@@08_MARS_828_install_grant_t_filename.sql
PROMPT
PROMPT Step 5/10: Deploying FILE_ARCHIVER Package Specification v3.3.0
PROMPT ================================================================
PROMPT Step 7/14: Deploying FILE_ARCHIVER Package Specification v3.3.0
PROMPT ==============================================================
@@03_MARS_828_install_CT_MRDS_FILE_ARCHIVER_SPEC.sql
PROMPT
PROMPT Step 6/10: Deploying FILE_ARCHIVER Package Body v3.3.0
PROMPT ======================================================
PROMPT Step 8/14: Deploying FILE_ARCHIVER Package Body v3.3.0
PROMPT ====================================================
@@04_MARS_828_install_CT_MRDS_FILE_ARCHIVER_BODY.sql
PROMPT
PROMPT Step 7/10: Verifying installation
PROMPT =================================
PROMPT Step 9/14: Deploying FILE_MANAGER Package Specification v3.3.2
PROMPT =============================================================
@@09_MARS_828_install_CT_MRDS_FILE_MANAGER_SPEC.sql
PROMPT
PROMPT Step 10/14: Deploying FILE_MANAGER Package Body v3.3.2
PROMPT ===================================================
@@10_MARS_828_install_CT_MRDS_FILE_MANAGER_BODY.sql
PROMPT
PROMPT Step 11/14: Verifying installation
PROMPT ==================================
@@05_MARS_828_verify_installation.sql
PROMPT
PROMPT Step 8/10: Tracking package versions
PROMPT ====================================
PROMPT Step 12/14: Tracking package versions
PROMPT =====================================
@@track_package_versions.sql
PROMPT
PROMPT Step 9/10: Verifying tracked packages
PROMPT =====================================
PROMPT Step 13/14: Verifying tracked packages
PROMPT ======================================
@@verify_packages_version.sql
PROMPT
PROMPT Step 10/10: Configuring Release 01 tables archival strategies
PROMPT Step 14/14: Configuring Release 01 tables archival strategies
PROMPT ============================================================
@@06_MARS_828_configure_release01_tables.sql
@@ -113,12 +133,13 @@ PROMPT Completion Time:
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS install_end FROM DUAL;
PROMPT
PROMPT Installation Summary:
PROMPT - Package: CT_MRDS.FILE_ARCHIVER
PROMPT - Version: 3.3.0 (includes selective archiving and config-based TRASH policy)
PROMPT - Packages Installed:
PROMPT * CT_MRDS.FILE_ARCHIVER v3.3.0 (includes selective archiving and config-based TRASH policy)
PROMPT * CT_MRDS.FILE_MANAGER v3.3.2 (compatible with MARS-828 threshold column renames)
PROMPT - Strategies: THRESHOLD_BASED (default), MINIMUM_AGE_MONTHS (0=current month), HYBRID
PROMPT - Selective Archiving: ARCHIVE_ENABLED column (Y=archive, N=skip)
PROMPT - TRASH Policy: KEEP_IN_TRASH column (Y=keep files, N=delete immediately)
PROMPT * Default: ARCHIVE_ENABLED='Y', KEEP_IN_TRASH='N' (archiving enabled, immediate deletion)
PROMPT - Selective Archiving: IS_ARCHIVE_ENABLED column (Y=archive, N=skip)
PROMPT - TRASH Policy: IS_KEEP_IN_TRASH column (Y=keep files, N=delete immediately)
PROMPT * Default: IS_ARCHIVE_ENABLED='Y', IS_KEEP_IN_TRASH='N' (archiving enabled, immediate deletion)
PROMPT * TRASH is a subfolder in DATA bucket (e.g., TRASH/LM/TABLE_NAME)
PROMPT * No more pKeepInTrash parameter - policy from config only
PROMPT - New Procedure: ARCHIVE_ALL_FOR_SOURCE(pSourceKey) for batch processing
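A minimal invocation sketch of the batch entry point named in the summary above (the source key `'LM'` is illustrative; per the notes, the TRASH policy is read from `A_SOURCE_FILE_CONFIG` rather than passed as a parameter):

```sql
-- Sketch: archive everything configured for one source in a single call.
BEGIN
  CT_MRDS.FILE_ARCHIVER.ARCHIVE_ALL_FOR_SOURCE(pSourceKey => 'LM');
END;
/
```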

@@ -16,20 +16,20 @@ CREATE TABLE CT_MRDS.A_SOURCE_FILE_CONFIG (
TABLE_ID VARCHAR2(200),
TEMPLATE_TABLE_NAME VARCHAR2(200),
CONTAINER_FILE_KEY NUMBER(38,0),
DAYS_FOR_ARCHIVE_THRESHOLD NUMBER(4,0),
FILES_COUNT_OVER_ARCHIVE_THRESHOLD NUMBER(38,0),
BYTES_SUM_OVER_ARCHIVE_THRESHOLD NUMBER(38,0),
ARCHIVE_THRESHOLD_DAYS NUMBER(4,0),
ARCHIVE_THRESHOLD_FILES_COUNT NUMBER(38,0),
ARCHIVE_THRESHOLD_BYTES_SUM NUMBER(38,0),
ODS_SCHEMA_NAME VARCHAR2(100),
ROWS_COUNT_OVER_ARCHIVE_THRESHOLD NUMBER(38,0),
ARCHIVE_THRESHOLD_ROWS_COUNT NUMBER(38,0),
HOURS_TO_EXPIRE_STATISTICS NUMBER(38,3),
ARCHIVAL_STRATEGY VARCHAR2(50),
MINIMUM_AGE_MONTHS NUMBER(3,0),
ENCODING VARCHAR2(50) DEFAULT 'UTF8',
ARCHIVE_ENABLED CHAR(1) DEFAULT 'N' NOT NULL,
KEEP_IN_TRASH CHAR(1) DEFAULT 'N' NOT NULL,
IS_ARCHIVE_ENABLED CHAR(1) DEFAULT 'N' NOT NULL,
IS_KEEP_IN_TRASH CHAR(1) DEFAULT 'N' NOT NULL,
CONSTRAINT A_SOURCE_FILE_CONFIG_PK PRIMARY KEY (A_SOURCE_FILE_CONFIG_KEY),
CONSTRAINT CHK_ARCHIVE_ENABLED CHECK (ARCHIVE_ENABLED IN ('Y', 'N')),
CONSTRAINT CHK_KEEP_IN_TRASH CHECK (KEEP_IN_TRASH IN ('Y', 'N')),
CONSTRAINT CHK_IS_ARCHIVE_ENABLED CHECK (IS_ARCHIVE_ENABLED IN ('Y', 'N')),
CONSTRAINT CHK_IS_KEEP_IN_TRASH CHECK (IS_KEEP_IN_TRASH IN ('Y', 'N')),
CONSTRAINT SOURCE_FILE_TYPE_CHK CHECK (SOURCE_FILE_TYPE IN ('INPUT', 'CONTAINER', 'LOAD_CONFIG')),
CONSTRAINT ASFC_A_SOURCE_KEY_FK FOREIGN KEY(A_SOURCE_KEY) REFERENCES CT_MRDS.A_SOURCE(A_SOURCE_KEY),
CONSTRAINT ASFC_CONTAINER_FILE_KEY_FK FOREIGN KEY(CONTAINER_FILE_KEY) REFERENCES CT_MRDS.A_SOURCE_FILE_CONFIG(A_SOURCE_FILE_CONFIG_KEY),
@@ -47,10 +47,64 @@ ON "CT_MRDS"."A_SOURCE_FILE_CONFIG" ("SOURCE_FILE_TYPE", "SOURCE_FILE_ID", "TABL
TABLESPACE "DATA";
-- Column comments
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY IS 'Archival strategy: THRESHOLD_BASED, CURRENT_MONTH_ONLY, MINIMUM_AGE_MONTHS, HYBRID. Added in MARS-828';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS IS 'Minimum age in months before archival (required for MINIMUM_AGE_MONTHS strategy). Added in MARS-828';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING IS 'Oracle character set name for CSV files (e.g., UTF8, WE8MSWIN1252, EE8ISO8859P2). Added in MARS-1049';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_ENABLED IS 'Y=Enable archiving, N=Skip archiving. Controls if table participates in archival process. Added in MARS-828 v3.3.0';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.KEEP_IN_TRASH IS 'Y=Keep files in TRASH after archiving, N=Delete immediately. Controls TRASH retention policy. Added in MARS-828 v3.3.0';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY IS
'Primary key - unique identifier for source file configuration record';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_KEY IS
'Foreign key to A_SOURCE table - identifies the source system (e.g., LM, C2D, CSDB)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE IS
'Type of file configuration: INPUT (data files), CONTAINER (XML container files), or LOAD_CONFIG (configuration files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID IS
'Unique identifier for the source file within the source system (e.g., UC_DISSEM, STANDING_FACILITIES)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_DESC IS
'Human-readable description of the source file and its purpose';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_NAME_PATTERN IS
'Filename pattern for matching incoming files (supports wildcards, e.g., UC_NMA_DISSEM-*.csv)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID IS
'Identifier for the target table where data will be loaded (without schema prefix)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME IS
'Fully qualified name of template table in CT_ET_TEMPLATES schema used for external table creation';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY IS
'Foreign key to parent container configuration when this file is part of an XML container (NULL for standalone files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_DAYS IS
'Threshold for THRESHOLD_BASED strategy: archive data older than N days';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_FILES_COUNT IS
'Trigger archival when file count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_BYTES_SUM IS
'Trigger archival when total size in bytes exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_ROWS_COUNT IS
'Trigger archival when total row count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ODS_SCHEMA_NAME IS
'Schema name where ODS external tables are created (typically ODS)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.HOURS_TO_EXPIRE_STATISTICS IS
'Number of hours before table statistics expire and need to be recalculated';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY IS
'Archival strategy: THRESHOLD_BASED (days-based), MINIMUM_AGE_MONTHS (0=current month, N=retain N months), HYBRID (combination)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS IS
'Minimum age in months before archival (required for MINIMUM_AGE_MONTHS and HYBRID strategies, 0=current month only)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING IS
'Oracle character set name for CSV files (e.g., UTF8, WE8MSWIN1252, EE8ISO8859P2)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED IS
'Y=Enable archiving, N=Skip archiving. Controls if table participates in archival process';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_KEEP_IN_TRASH IS
'Y=Keep files in TRASH after archiving, N=Delete immediately. Controls TRASH retention policy';
GRANT SELECT, INSERT, UPDATE, DELETE ON CT_MRDS.A_SOURCE_FILE_CONFIG TO MRDS_LOADER_ROLE;
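With the flag columns and check constraints above, per-table policy changes reduce to simple updates; a sketch, where the `TABLE_ID` value is illustrative:

```sql
-- Sketch: keep archived files in TRASH for one table instead of deleting them.
-- 'LM_EXAMPLE_TABLE' is an illustrative TABLE_ID, not a real configured table.
UPDATE CT_MRDS.A_SOURCE_FILE_CONFIG
SET    IS_KEEP_IN_TRASH = 'Y'
WHERE  SOURCE_FILE_TYPE = 'INPUT'
AND    TABLE_ID = 'LM_EXAMPLE_TABLE';
-- CHK_IS_KEEP_IN_TRASH rejects any value other than 'Y' or 'N'.
COMMIT;
```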

View File

@@ -26,4 +26,41 @@ CREATE TABLE CT_MRDS.A_SOURCE_FILE_RECEIVED (
CREATE UNIQUE INDEX CT_MRDS.A_SOURCE_FILE_RECEIVED_UK1
ON CT_MRDS.A_SOURCE_FILE_RECEIVED(CHECKSUM, CREATED, BYTES);
-- Column comments
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY IS
'Primary key - unique identifier for received file record';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_CONFIG_KEY IS
'Foreign key to A_SOURCE_FILE_CONFIG - links file to its configuration and processing rules';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.SOURCE_FILE_NAME IS
'Full object name/path of the received file in OCI Object Storage (includes INBOX prefix)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.CHECKSUM IS
'MD5 checksum of file content for integrity verification and duplicate detection';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.CREATED IS
'Timestamp with timezone when file was created/uploaded to Object Storage (from DBMS_CLOUD.LIST_OBJECTS)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.BYTES IS
'File size in bytes';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.RECEPTION_DATE IS
'Date when file was registered in the system (extracted from CREATED timestamp)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PROCESSING_STATUS IS
'Current processing status: RECEIVED → VALIDATED → READY_FOR_INGESTION → INGESTED → ARCHIVED_AND_TRASHED → ARCHIVED_AND_PURGED';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.EXTERNAL_TABLE_NAME IS
'Name of temporary external table created for file validation (dropped after validation)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PARTITION_YEAR IS
'Year partition value (YYYY format) when file was archived to ARCHIVE bucket with Hive-style partitioning';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PARTITION_MONTH IS
'Month partition value (MM format) when file was archived to ARCHIVE bucket with Hive-style partitioning';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.ARCH_FILE_NAME IS
'Archive directory prefix in ARCHIVE bucket containing archived Parquet files (supports multiple files from parallel DBMS_CLOUD.EXPORT_DATA)';
GRANT SELECT, INSERT, UPDATE, DELETE ON CT_MRDS.A_SOURCE_FILE_RECEIVED TO MRDS_LOADER_ROLE;

View File

@@ -21,7 +21,7 @@ AS
CASE pSourceFileConfig.ARCHIVAL_STRATEGY
-- Legacy threshold-based strategy (backward compatible)
WHEN 'THRESHOLD_BASED' THEN
vWhereClause := 'extract(day from (systimestamp - workflow_start)) > ' || pSourceFileConfig.DAYS_FOR_ARCHIVE_THRESHOLD;
vWhereClause := 'extract(day from (systimestamp - workflow_start)) > ' || pSourceFileConfig.ARCHIVE_THRESHOLD_DAYS;
-- Archive data older than X months (0 = current month only)
WHEN 'MINIMUM_AGE_MONTHS' THEN
@@ -113,15 +113,15 @@ AS
vSourceFileConfig := CT_MRDS.FILE_MANAGER.GET_SOURCE_FILE_CONFIG(pSourceFileConfigKey => pSourceFileConfigKey);
-- Check if archiving is enabled for this configuration
IF vSourceFileConfig.ARCHIVE_ENABLED = 'N' THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archiving disabled for this configuration (ARCHIVE_ENABLED=N). Skipping.', 'WARNING', vParameters);
IF vSourceFileConfig.IS_ARCHIVE_ENABLED = 'N' THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archiving disabled for this configuration (IS_ARCHIVE_ENABLED=N). Skipping.', 'WARNING', vParameters);
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO',vParameters);
RETURN;
END IF;
-- Get TRASH policy from configuration
vKeepInTrash := (vSourceFileConfig.KEEP_IN_TRASH = 'Y');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('TRASH policy from config: KEEP_IN_TRASH=' || vSourceFileConfig.KEEP_IN_TRASH, 'INFO', vParameters);
vKeepInTrash := (vSourceFileConfig.IS_KEEP_IN_TRASH = 'Y');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('TRASH policy from config: IS_KEEP_IN_TRASH=' || vSourceFileConfig.IS_KEEP_IN_TRASH, 'INFO', vParameters);
vTableStat := GET_TABLE_STAT(pSourceFileConfigKey => pSourceFileConfigKey);
@@ -142,9 +142,9 @@ AS
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archival strategy: MINIMUM_AGE_MONTHS (threshold-independent)','INFO');
ELSE
-- THRESHOLD_BASED and HYBRID: Check thresholds
if vTableStat.OVER_ARCH_THRESOLD_FILE_COUNT >= vSourceFileConfig.FILES_COUNT_OVER_ARCHIVE_THRESHOLD then vArchivalTriggeredBy := 'FILES_COUNT';
elsif vTableStat.OVER_ARCH_THRESOLD_ROW_COUNT >= vSourceFileConfig.ROWS_COUNT_OVER_ARCHIVE_THRESHOLD then vArchivalTriggeredBy := vArchivalTriggeredBy||', ROWS_COUNT';
elsif vTableStat.OVER_ARCH_THRESOLD_SIZE >= vSourceFileConfig.BYTES_SUM_OVER_ARCHIVE_THRESHOLD then vArchivalTriggeredBy := vArchivalTriggeredBy||', BYTES_SUM';
if vTableStat.OVER_ARCH_THRESOLD_FILE_COUNT >= vSourceFileConfig.ARCHIVE_THRESHOLD_FILES_COUNT then vArchivalTriggeredBy := 'FILES_COUNT';
elsif vTableStat.OVER_ARCH_THRESOLD_ROW_COUNT >= vSourceFileConfig.ARCHIVE_THRESHOLD_ROWS_COUNT then vArchivalTriggeredBy := vArchivalTriggeredBy||', ROWS_COUNT';
elsif vTableStat.OVER_ARCH_THRESOLD_SIZE >= vSourceFileConfig.ARCHIVE_THRESHOLD_BYTES_SUM then vArchivalTriggeredBy := vArchivalTriggeredBy||', BYTES_SUM';
else CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('None of the archival triggers reached','INFO');
end if;
END IF;
@@ -166,6 +166,7 @@ AS
join CT_MRDS.a_workflow_history h
on s.a_workflow_history_key = h.a_workflow_history_key
where ' || GET_ARCHIVAL_WHERE_CLAUSE(vSourceFileConfig) || '
and h.WORKFLOW_SUCCESSFUL = ''Y''
group by file$name, file$path, to_char(h.workflow_start,''yyyy''), to_char(h.workflow_start,''mm'')'
;
@@ -182,11 +183,11 @@ AS
join CT_MRDS.A_SOURCE_FILE_RECEIVED r
on s.file$name = r.source_file_name
and r.a_source_file_config_key = '||pSourceFileConfigKey||'
and r.PROCESSING_STATUS = ''INGESTED''
join CT_MRDS.a_workflow_history h
on s.a_workflow_history_key = h.a_workflow_history_key
and to_char(h.workflow_start,''yyyy'') = '''||ym_loop.year||'''
and to_char(h.workflow_start,''mm'') = '''||ym_loop.month||'''
and h.WORKFLOW_SUCCESSFUL = ''Y''
'
;
vUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('ARCHIVE')||'ARCHIVE/'||vSourceFileConfig.A_SOURCE_KEY||'/'||vSourceFileConfig.TABLE_ID||'/PARTITION_YEAR='||ym_loop.year||'/PARTITION_MONTH='||ym_loop.month||'/';
@@ -296,10 +297,10 @@ AS
AND r.source_file_name = f.filename
AND r.PROCESSING_STATUS = 'ARCHIVED_AND_TRASHED';
END LOOP;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('All archived files removed from TRASH folder and marked as ARCHIVED_AND_PURGED (config: KEEP_IN_TRASH=N).','INFO');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('All archived files removed from TRASH folder and marked as ARCHIVED_AND_PURGED (config: IS_KEEP_IN_TRASH=N).','INFO');
ELSE
-- Keep files in TRASH folder (status remains ARCHIVED_AND_TRASHED)
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archived files kept in TRASH folder for retention (config: KEEP_IN_TRASH=Y, status: ARCHIVED_AND_TRASHED).','INFO');
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Archived files kept in TRASH folder for retention (config: IS_KEEP_IN_TRASH=Y, status: ARCHIVED_AND_TRASHED).','INFO');
END IF;
--ROLLBACK PART
@@ -483,7 +484,7 @@ AS
,sum(case when ' || vWhereClause || ' then row_count_per_file else 0 end) as OLD_ROW_COUNT
,sum(r.bytes) as BYTES
,sum(case when ' || vWhereClause || ' then r.bytes else 0 end) as OLD_BYTES
,'||COALESCE(TO_CHAR(vSourceFileConfig.DAYS_FOR_ARCHIVE_THRESHOLD), 'NULL')||' as DAYS_FOR_ARCHIVE_THRESHOLD
,'||COALESCE(TO_CHAR(vSourceFileConfig.ARCHIVE_THRESHOLD_DAYS), 'NULL')||' as ARCHIVE_THRESHOLD_DAYS
,systimestamp as CREATED
from tmp_gr t
join (SELECT * from DBMS_CLOUD.LIST_OBJECTS(
@@ -1041,8 +1042,8 @@ AS
SELECT
A_SOURCE_FILE_CONFIG_KEY,
TABLE_ID,
ARCHIVE_ENABLED,
KEEP_IN_TRASH,
IS_ARCHIVE_ENABLED,
IS_KEEP_IN_TRASH,
A_SOURCE_KEY
FROM CT_MRDS.A_SOURCE_FILE_CONFIG
WHERE SOURCE_FILE_TYPE = 'INPUT'
@@ -1058,16 +1059,16 @@ AS
)
ORDER BY A_SOURCE_KEY, A_SOURCE_FILE_CONFIG_KEY
) LOOP
IF config_rec.ARCHIVE_ENABLED = 'N' THEN
IF config_rec.IS_ARCHIVE_ENABLED = 'N' THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(
'Skipping table ' || config_rec.TABLE_ID || ' (ARCHIVE_ENABLED=N) [Source: ' || config_rec.A_SOURCE_KEY || ', Config: ' || config_rec.A_SOURCE_FILE_CONFIG_KEY || ']',
'Skipping table ' || config_rec.TABLE_ID || ' (IS_ARCHIVE_ENABLED=N) [Source: ' || config_rec.A_SOURCE_KEY || ', Config: ' || config_rec.A_SOURCE_FILE_CONFIG_KEY || ']',
'INFO'
);
vTablesSkipped := vTablesSkipped + 1;
ELSE
BEGIN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(
'Archiving table ' || config_rec.TABLE_ID || ' [Source: ' || config_rec.A_SOURCE_KEY || ', Config: ' || config_rec.A_SOURCE_FILE_CONFIG_KEY || ', KEEP_IN_TRASH=' || config_rec.KEEP_IN_TRASH || ']',
'Archiving table ' || config_rec.TABLE_ID || ' [Source: ' || config_rec.A_SOURCE_KEY || ', Config: ' || config_rec.A_SOURCE_FILE_CONFIG_KEY || ', IS_KEEP_IN_TRASH=' || config_rec.IS_KEEP_IN_TRASH || ']',
'INFO'
);
@@ -1174,14 +1175,14 @@ AS
END IF;
-- Set enabled filter info
vEnabledFilter := CASE WHEN pOnlyEnabled THEN 'ARCHIVE_ENABLED=Y only' ELSE 'All tables' END;
vEnabledFilter := CASE WHEN pOnlyEnabled THEN 'IS_ARCHIVE_ENABLED=Y only' ELSE 'All tables' END;
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT('Filter mode: ' || vEnabledFilter, 'INFO');
FOR config_rec IN (
SELECT
A_SOURCE_FILE_CONFIG_KEY,
TABLE_ID,
ARCHIVE_ENABLED,
IS_ARCHIVE_ENABLED,
A_SOURCE_KEY
FROM CT_MRDS.A_SOURCE_FILE_CONFIG
WHERE SOURCE_FILE_TYPE = 'INPUT'
@@ -1195,20 +1196,20 @@ AS
-- Level 3: All configs when pGatherAll = TRUE
(pSourceFileConfigKey IS NULL AND pSourceKey IS NULL AND pGatherAll = TRUE)
)
-- Apply ARCHIVE_ENABLED filter if pOnlyEnabled = TRUE
AND (pOnlyEnabled = FALSE OR ARCHIVE_ENABLED = 'Y')
-- Apply IS_ARCHIVE_ENABLED filter if pOnlyEnabled = TRUE
AND (pOnlyEnabled = FALSE OR IS_ARCHIVE_ENABLED = 'Y')
ORDER BY A_SOURCE_KEY, A_SOURCE_FILE_CONFIG_KEY
) LOOP
IF pOnlyEnabled AND config_rec.ARCHIVE_ENABLED = 'N' THEN
IF pOnlyEnabled AND config_rec.IS_ARCHIVE_ENABLED = 'N' THEN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(
'Skipping table ' || config_rec.TABLE_ID || ' (ARCHIVE_ENABLED=N) [Source: ' || config_rec.A_SOURCE_KEY || ', Config: ' || config_rec.A_SOURCE_FILE_CONFIG_KEY || ']',
'Skipping table ' || config_rec.TABLE_ID || ' (IS_ARCHIVE_ENABLED=N) [Source: ' || config_rec.A_SOURCE_KEY || ', Config: ' || config_rec.A_SOURCE_FILE_CONFIG_KEY || ']',
'INFO'
);
vTablesSkipped := vTablesSkipped + 1;
ELSE
BEGIN
CT_MRDS.ENV_MANAGER.LOG_PROCESS_EVENT(
'Gathering statistics for table ' || config_rec.TABLE_ID || ' [Source: ' || config_rec.A_SOURCE_KEY || ', Config: ' || config_rec.A_SOURCE_FILE_CONFIG_KEY || ', ARCHIVE_ENABLED=' || config_rec.ARCHIVE_ENABLED || ']',
'Gathering statistics for table ' || config_rec.TABLE_ID || ' [Source: ' || config_rec.A_SOURCE_KEY || ', Config: ' || config_rec.A_SOURCE_FILE_CONFIG_KEY || ', IS_ARCHIVE_ENABLED=' || config_rec.IS_ARCHIVE_ENABLED || ']',
'INFO'
);

View File

@@ -23,7 +23,7 @@ AS
-- Version History (Latest changes first)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'3.3.0 (2026-02-11): Added ARCHIVE_ENABLED and KEEP_IN_TRASH columns to A_SOURCE_FILE_CONFIG for selective archiving and config-based TRASH policy. Removed pKeepInTrash parameter (now from config). Added ARCHIVE_ALL batch procedure with 3-level granularity (config/source/all). Added GATHER_TABLE_STAT_ALL batch statistics procedure with 3-level granularity. Added RESTORE_FILE_FROM_TRASH and PURGE_TRASH_FOLDER with 3-level granularity' || CHR(13)||CHR(10) ||
'3.3.0 (2026-02-11): Added IS_ARCHIVE_ENABLED and IS_KEEP_IN_TRASH columns to A_SOURCE_FILE_CONFIG for selective archiving and config-based TRASH policy. Removed pKeepInTrash parameter (now from config). Added ARCHIVE_ALL batch procedure with 3-level granularity (config/source/all). Added GATHER_TABLE_STAT_ALL batch statistics procedure with 3-level granularity. Added RESTORE_FILE_FROM_TRASH and PURGE_TRASH_FOLDER with 3-level granularity' || CHR(13)||CHR(10) ||
'3.2.1 (2026-02-10): Fixed status update - ARCHIVED → ARCHIVED_AND_TRASHED when moving files to TRASH folder (critical bug fix)' || CHR(13)||CHR(10) ||
'3.2.0 (2026-02-06): Added pKeepInTrash parameter (DEFAULT TRUE) to ARCHIVE_TABLE_DATA for TRASH folder retention control - files kept in TRASH subfolder (DATA bucket) by default for safety and compliance' || CHR(13)||CHR(10) ||
'3.1.2 (2026-02-06): Fixed missing PARTITION_YEAR/PARTITION_MONTH assignments in UPDATE statement and export query circular dependency (now filters by workflow_start instead of partition fields)' || CHR(13)||CHR(10) ||
@@ -51,7 +51,7 @@ AS
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data from the table specified by pSourceFileConfigKey (A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY) into a PARQUET file on OCI infrastructure.
* Each YEAR_MONTH pair goes to a separate file (implicit partitioning).
* TRASH policy is controlled by A_SOURCE_FILE_CONFIG.KEEP_IN_TRASH column ('Y'=keep in TRASH, 'N'=delete immediately).
* TRASH policy is controlled by A_SOURCE_FILE_CONFIG.IS_KEEP_IN_TRASH column ('Y'=keep in TRASH, 'N'=delete immediately).
**/
PROCEDURE ARCHIVE_TABLE_DATA (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE
@@ -62,7 +62,7 @@ AS
* @desc Function wrapper for ARCHIVE_TABLE_DATA procedure.
* Returns SQLCODE for Python library integration.
* Calls the main ARCHIVE_TABLE_DATA procedure and captures execution result.
* TRASH policy is controlled by A_SOURCE_FILE_CONFIG.KEEP_IN_TRASH column ('Y'=keep in TRASH, 'N'=delete immediately).
* TRASH policy is controlled by A_SOURCE_FILE_CONFIG.IS_KEEP_IN_TRASH column ('Y'=keep in TRASH, 'N'=delete immediately).
* @example SELECT FILE_ARCHIVER.FN_ARCHIVE_TABLE_DATA(pSourceFileConfigKey => 123) FROM DUAL;
* @ex_rslt 0 (success) or error code
**/
@@ -96,16 +96,16 @@ AS
/**
* @name GATHER_TABLE_STAT_ALL
* @desc Multi-level batch statistics gathering procedure with three granularity levels.
* Processes configurations based on ARCHIVE_ENABLED setting (when pOnlyEnabled=TRUE).
* Processes configurations based on IS_ARCHIVE_ENABLED setting (when pOnlyEnabled=TRUE).
* Gathers statistics for external tables and inserts data into A_TABLE_STAT and A_TABLE_STAT_HIST.
* @param pSourceFileConfigKey - (LEVEL 1) Gather stats for specific configuration key (highest priority)
* @param pSourceKey - (LEVEL 2) Gather stats for all tables in source system (e.g., 'LM', 'C2D') (medium priority)
* @param pGatherAll - (LEVEL 3) When TRUE, gather stats for ALL tables across all sources (lowest priority)
* @param pOnlyEnabled - When TRUE (default), only process tables with ARCHIVE_ENABLED='Y'
* @param pOnlyEnabled - When TRUE (default), only process tables with IS_ARCHIVE_ENABLED='Y'
* @example -- Level 1: CALL FILE_ARCHIVER.GATHER_TABLE_STAT_ALL(pSourceFileConfigKey => 123);
* @example -- Level 2: CALL FILE_ARCHIVER.GATHER_TABLE_STAT_ALL(pSourceKey => 'LM');
* @example -- Level 3: CALL FILE_ARCHIVER.GATHER_TABLE_STAT_ALL(pGatherAll => TRUE);
* @example -- All tables regardless of ARCHIVE_ENABLED: CALL FILE_ARCHIVER.GATHER_TABLE_STAT_ALL(pGatherAll => TRUE, pOnlyEnabled => FALSE);
* @example -- All tables regardless of IS_ARCHIVE_ENABLED: CALL FILE_ARCHIVER.GATHER_TABLE_STAT_ALL(pGatherAll => TRUE, pOnlyEnabled => FALSE);
**/
PROCEDURE GATHER_TABLE_STAT_ALL (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
@@ -122,7 +122,7 @@ AS
* @param pSourceFileConfigKey - (LEVEL 1) Gather stats for specific configuration key (highest priority)
* @param pSourceKey - (LEVEL 2) Gather stats for all tables in source system (medium priority)
* @param pGatherAll - (LEVEL 3) When TRUE, gather stats for ALL tables across all sources (lowest priority)
* @param pOnlyEnabled - When TRUE (default), only process tables with ARCHIVE_ENABLED='Y'
* @param pOnlyEnabled - When TRUE (default), only process tables with IS_ARCHIVE_ENABLED='Y'
* @example SELECT FILE_ARCHIVER.FN_GATHER_TABLE_STAT_ALL(pSourceKey => 'LM') FROM DUAL;
* @ex_rslt 0 (success) or error code
**/
@@ -136,8 +136,8 @@ AS
/**
* @name ARCHIVE_ALL
* @desc Multi-level batch archival procedure with three granularity levels.
* Only processes configurations where ARCHIVE_ENABLED='Y'.
* TRASH policy for each table is controlled by individual KEEP_IN_TRASH column.
* Only processes configurations where IS_ARCHIVE_ENABLED='Y'.
* TRASH policy for each table is controlled by individual IS_KEEP_IN_TRASH column.
* @param pSourceFileConfigKey - (LEVEL 1) Archive specific configuration key (highest priority)
* @param pSourceKey - (LEVEL 2) Archive all enabled tables for source system (e.g., 'LM', 'C2D') (medium priority)
* @param pArchiveAll - (LEVEL 3) When TRUE, archive ALL enabled tables across all sources (lowest priority)

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,639 @@
create or replace PACKAGE CT_MRDS.FILE_MANAGER
AUTHID CURRENT_USER
AS
/**
* General comment for the package: please document functions and procedures as shown in the example below.
* This is the standard.
* The comment structure is parsed by the GET_PACKAGE_DOCUMENTATION function,
* which returns documentation text for the Confluence page (ready to copy-paste).
**/
-- Example comment:
/**
* @name EX_PROCEDURE_NAME
* @desc Procedure description
* @example select FILE_MANAGER.EX_PROCEDURE_NAME(pParameter => 129) from dual;
* @ex_rslt Example Result
**/
-- Package Version Information (Semantic Versioning: MAJOR.MINOR.PATCH)
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.5.1';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-02-24 13:35:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(100) := 'Grzegorz Michalski';
-- Version History (Latest changes first)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'3.5.1 (2026-02-24): Fixed TIMESTAMP field syntax in GENERATE_EXTERNAL_TABLE_PARAMS for SQL*Loader compatibility (CHAR(35) DATE_FORMAT TIMESTAMP MASK format)' || CHR(13)||CHR(10) ||
'3.3.2 (2026-02-20): MARS-828 - Fixed threshold column names in GET_DET_SOURCE_FILE_CONFIG_INFO for MARS-828 compatibility' || CHR(13)||CHR(10) ||
'3.3.1 (2025-11-27): MARS-1046 - Fixed ISO 8601 datetime format parsing with milliseconds and timezone (e.g., 2012-03-02T14:16:23.798+01:00)' || CHR(13)||CHR(10) ||
'3.3.0 (2025-11-26): MARS-1056 - Fixed VARCHAR2 definitions in GENERATE_EXTERNAL_TABLE_PARAMS to preserve CHAR/BYTE semantics from template tables' || CHR(13)||CHR(10) ||
'3.2.1 (2025-11-24): MARS-1049 - Added pEncoding parameter support for CSV character set specification' || CHR(13)||CHR(10) ||
'3.2.0 (2025-10-22): Added package versioning system using centralized ENV_MANAGER functions' || CHR(13)||CHR(10) ||
'3.1.0 (2025-10-20): Enhanced PROCESS_SOURCE_FILE with 6-step validation workflow' || CHR(13)||CHR(10) ||
'3.0.0 (2025-10-15): Separated export procedures into dedicated DATA_EXPORTER package' || CHR(13)||CHR(10) ||
'2.5.0 (2025-10-10): Added DELETE_SOURCE_CASCADE for safe configuration removal' || CHR(13)||CHR(10) ||
'2.0.0 (2025-09-25): Added official path patterns support (INBOX 3-level, ODS 2-level, ARCHIVE 2-level)' || CHR(13)||CHR(10) ||
'1.0.0 (2025-09-01): Initial release with file processing and validation capabilities';
TYPE tSourceFileReceived IS RECORD
(
A_SOURCE_FILE_RECEIVED_KEY CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE,
A_SOURCE_FILE_CONFIG_KEY CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_CONFIG_KEY%TYPE,
SOURCE_FILE_PREFIX_INBOX VARCHAR2(430),
SOURCE_FILE_PREFIX_ODS VARCHAR2(430),
SOURCE_FILE_PREFIX_QUARANTINE VARCHAR2(430),
SOURCE_FILE_PREFIX_ARCHIVE VARCHAR2(430),
SOURCE_FILE_NAME CT_MRDS.A_SOURCE_FILE_RECEIVED.SOURCE_FILE_NAME%TYPE,
RECEPTION_DATE CT_MRDS.A_SOURCE_FILE_RECEIVED.RECEPTION_DATE%TYPE,
PROCESSING_STATUS CT_MRDS.A_SOURCE_FILE_RECEIVED.PROCESSING_STATUS%TYPE,
EXTERNAL_TABLE_NAME CT_MRDS.A_SOURCE_FILE_RECEIVED.EXTERNAL_TABLE_NAME%TYPE
);
cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);
vgSourceFileConfigKey PLS_INTEGER;
vgMsgTmp VARCHAR2(32000);
---------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------
/**
* @name GET_SOURCE_FILE_CONFIG
* @desc Get source file type by matching the source file name against source file type naming patterns
* or by specifying the id of a received source file.
* @example ...
* @ex_rslt "CT_MRDS.A_SOURCE_FILE_CONFIG%ROWTYPE"
**/
FUNCTION GET_SOURCE_FILE_CONFIG(pFileUri IN VARCHAR2 DEFAULT NULL
, pSourceFileReceivedKey IN NUMBER DEFAULT NULL
, pSourceFileConfigKey IN NUMBER DEFAULT NULL)
RETURN CT_MRDS.A_SOURCE_FILE_CONFIG%ROWTYPE;
/**
* @name REGISTER_SOURCE_FILE_RECEIVED
* @desc Register a newly received source file in the A_SOURCE_FILE_RECEIVED table.
* This overload automatically determines the source file type from the file name.
* It returns the value of the A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY column for the newly added record.
* @example vSourceFileReceivedKey := FILE_MANAGER.REGISTER_SOURCE_FILE_RECEIVED(pSourceFileReceivedName => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv');
* @ex_rslt 3245
**/
FUNCTION REGISTER_SOURCE_FILE_RECEIVED (
pSourceFileReceivedName IN VARCHAR2
)
RETURN PLS_INTEGER;
/**
* @name REGISTER_SOURCE_FILE_RECEIVED
* @desc Register a new source file in the A_SOURCE_FILE_RECEIVED table based on pSourceFileReceivedName and pSourceFileConfig.
* It then returns the value of the A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY column for the newly added record.
* @example vSourceFileReceivedKey := FILE_MANAGER.REGISTER_SOURCE_FILE_RECEIVED(
* pSourceFileReceivedName => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv'
* ,pSourceFileConfig => ...A_SOURCE_FILE_CONFIG%ROWTYPE... );
* @ex_rslt 3245
**/
FUNCTION REGISTER_SOURCE_FILE_RECEIVED (
pSourceFileReceivedName IN VARCHAR2,
pSourceFileConfig IN CT_MRDS.A_SOURCE_FILE_CONFIG%ROWTYPE
)
RETURN PLS_INTEGER;
/**
* @name SET_SOURCE_FILE_RECEIVED_STATUS
* @desc Set the status of a file in the A_SOURCE_FILE_RECEIVED table (PROCESSING_STATUS column),
* based on A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY
* and the provided value of the pStatus parameter.
* @example exec FILE_MANAGER.SET_SOURCE_FILE_RECEIVED_STATUS(pSourceFileReceivedKey => 377, pStatus => 'READY_FOR_INGESTION');
**/
PROCEDURE SET_SOURCE_FILE_RECEIVED_STATUS(
pSourceFileReceivedKey IN PLS_INTEGER,
pStatus IN VARCHAR2
);
/**
* @name GET_EXTERNAL_TABLE_COLUMNS
* @desc Function used to get a string with all table column definitions based on the pTargetTableTemplate "TEMPLATE TABLE" name.
* It is used for creating an "EXTERNAL TABLE" via the CREATE_EXTERNAL_TABLE procedure.
* @example select FILE_MANAGER.GET_EXTERNAL_TABLE_COLUMNS(pTargetTableTemplate => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES_HEADER') from dual;
* @ex_rslt "A_KEY" NUMBER(38,0) NOT NULL ENABLE,
* "A_WORKFLOW_HISTORY_KEY" NUMBER(38,0) NOT NULL ENABLE,
* "REV_NUMBER" NUMBER(28,0),
* "REF_DATE" DATE,
* "FREE_TEXT" VARCHAR2(1000 CHAR),
* "MLF_BS_TOTAL" NUMBER(28,10),
* "DF_BS_TOTAL" NUMBER(28,10),
* "MLF_SF_TOTAL" NUMBER(28,10),
* "DF_SF_TOTAL" NUMBER(28,10)
**/
FUNCTION GET_EXTERNAL_TABLE_COLUMNS (
pTargetTableTemplate IN VARCHAR2
)
RETURN CLOB;
/**
* @name CREATE_EXTERNAL_TABLE
* @desc A wrapper procedure for DBMS_CLOUD.CREATE_EXTERNAL_TABLE which creates External Table
* MARS-1049: Added pEncoding parameter for CSV character set specification
* @param pEncoding - Character set encoding for CSV files (e.g., 'UTF8', 'WE8MSWIN1252')
* If provided, adds CHARACTERSET clause to external table definition
* @example
* begin
* FILE_MANAGER.CREATE_EXTERNAL_TABLE(
* pTableName => 'STANDING_FACILITIES_HEADER',
* pTemplateTableName => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES_HEADER',
* pPrefix => 'ODS/LM/STANDING_FACILITIES_HEADER/',
* pBucketUri => 'https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frcnomajoc7v/b/mrds_data_tst/o/',
* pFileName => NULL,
* pDelimiter => ',',
* pEncoding => 'UTF8'
* );
* end;
**/
PROCEDURE CREATE_EXTERNAL_TABLE (
pTableName IN VARCHAR2,
pTemplateTableName IN VARCHAR2,
pPrefix IN VARCHAR2,
pBucketUri IN VARCHAR2 DEFAULT ENV_MANAGER.gvInboxBucketUri,
pFileName IN VARCHAR2 DEFAULT NULL,
pDelimiter IN VARCHAR2 DEFAULT ',',
pEncoding IN VARCHAR2 DEFAULT NULL -- MARS-1049: new parameter
);
/**
* @name CREATE_EXTERNAL_TABLE
* @desc Creates an External Table for a single file provided by the
* pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* @example exec FILE_MANAGER.CREATE_EXTERNAL_TABLE(pSourceFileReceivedKey => 377);
**/
PROCEDURE CREATE_EXTERNAL_TABLE (
pSourceFileReceivedKey IN NUMBER
);
/**
* @name VALIDATE_SOURCE_FILE_RECEIVED
* @desc A wrapper procedure for DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE.
* It validates an External Table built upon a single file
* provided by the pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* @example exec FILE_MANAGER.VALIDATE_SOURCE_FILE_RECEIVED(pSourceFileReceivedKey => 377);
**/
PROCEDURE VALIDATE_SOURCE_FILE_RECEIVED
(
pSourceFileReceivedKey IN NUMBER
);
/**
* @name VALIDATE_EXTERNAL_TABLE
* @desc A wrapper function for DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE.
* It validates External Table provided by parameter pTableName.
* It returns: PASSED or FAILED.
* @example
* declare
* vStatus VARCHAR2(100);
* begin
* vStatus := FILE_MANAGER.VALIDATE_EXTERNAL_TABLE(pTableName => 'STANDING_FACILITIES_HEADER');
* DBMS_OUTPUT.PUT_LINE('vStatus = '||vStatus);
* end;
*
* @ex_rslt FAILED
**/
FUNCTION VALIDATE_EXTERNAL_TABLE(pTableName IN VARCHAR2)
RETURN VARCHAR2;
/**
* @name S_VALIDATE_EXTERNAL_TABLE
* @desc A function which checks if a SELECT query returns any rows.
* It tries to select from the External Table provided by the pTableName parameter.
* It returns: PASSED or FAILED.
* @example
* declare
* vStatus VARCHAR2(100);
* begin
* vStatus := FILE_MANAGER.S_VALIDATE_EXTERNAL_TABLE(pTableName => 'STANDING_FACILITIES_HEADER');
* DBMS_OUTPUT.PUT_LINE('vStatus = '||vStatus);
* end;
*
* @ex_rslt PASSED
**/
FUNCTION S_VALIDATE_EXTERNAL_TABLE(pTableName IN VARCHAR2)
RETURN VARCHAR2;
/**
* @name DROP_EXTERNAL_TABLE
* @desc It drops the External Table for a single file provided by the
* pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* @example exec FILE_MANAGER.DROP_EXTERNAL_TABLE(pSourceFileReceivedKey => 377);
**/
PROCEDURE DROP_EXTERNAL_TABLE (
pSourceFileReceivedKey IN NUMBER
);
/**
* @name COPY_FILE
* @desc It copies the file provided by the
* pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* into the destination provided by the pDestination parameter.
* Allowed values for the pDestination parameter are: 'ODS'
* @example exec FILE_MANAGER.COPY_FILE(pSourceFileReceivedKey => 377, pDestination => 'ODS');
**/
PROCEDURE COPY_FILE(
pSourceFileReceivedKey IN NUMBER,
pDestination IN VARCHAR2
);
/**
* @name MOVE_FILE
* @desc It moves the file provided by the
* pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* into the destination provided by the pDestination parameter.
* Allowed values for the pDestination parameter are: 'ODS', 'QUARANTINE'
* @example exec FILE_MANAGER.MOVE_FILE(pSourceFileReceivedKey => 377, pDestination => 'ODS');
**/
PROCEDURE MOVE_FILE(
pSourceFileReceivedKey IN NUMBER,
pDestination IN VARCHAR2
);
/**
* @name DELETE_FOLDER_CONTENTS
* @desc It deletes all files from the specified folder in cloud storage.
* The procedure lists all objects under the specified folder prefix and deletes them one by one.
* The pBucketArea parameter specifies which bucket to use: 'INBOX', 'DATA', 'ARCHIVE'
* The pFolderPrefix parameter specifies the folder path within the bucket (e.g., 'C2D/UC_DISSEM/UC_NMA_DISSEM/')
* @example exec FILE_MANAGER.DELETE_FOLDER_CONTENTS(pBucketArea => 'INBOX', pFolderPrefix => 'C2D/UC_DISSEM/UC_NMA_DISSEM/');
**/
PROCEDURE DELETE_FOLDER_CONTENTS(
pBucketArea IN VARCHAR2,
pFolderPrefix IN VARCHAR2
);
/**
* @name PROCESS_SOURCE_FILE
* @desc It processes the file provided by the pSourceFileReceivedName parameter.
* Umbrella procedure that calls:
* - REGISTER_SOURCE_FILE_RECEIVED;
* - CREATE_EXTERNAL_TABLE;
* - VALIDATE_SOURCE_FILE_RECEIVED;
* - DROP_EXTERNAL_TABLE;
* - MOVE_FILE;
* @example exec FILE_MANAGER.PROCESS_SOURCE_FILE(pSourceFileReceivedName => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv');
**/
PROCEDURE PROCESS_SOURCE_FILE(pSourceFileReceivedName IN VARCHAR2)
;
/**
* @name PROCESS_SOURCE_FILE
* @desc It processes the file provided by the pSourceFileReceivedName parameter and returns the processing result.
* It returns 0 on success or a negative error code on failure.
* Umbrella function that calls the PROCESS_SOURCE_FILE procedure.
* @example
* declare
* vResult PLS_INTEGER;
* begin
* vResult := CT_MRDS.FILE_MANAGER.PROCESS_SOURCE_FILE(PSOURCEFILERECEIVEDNAME => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv');
* DBMS_OUTPUT.PUT_LINE('vResult = ' || vResult);
* end;
* @ex_rslt 0
* -20021
**/
FUNCTION PROCESS_SOURCE_FILE(pSourceFileReceivedName IN VARCHAR2)
RETURN PLS_INTEGER;
/**
* @name GET_DATE_FORMAT
* @desc Returns the date format for the specified template table name and column name.
* The format is taken from the A_COLUMN_DATE_FORMAT configuration table.
* @example select FILE_MANAGER.GET_DATE_FORMAT(
* pTemplateTableName => 'STANDING_FACILITIES_HEADER',
* pColumnName => 'SNAPSHOT_DATE')
* from dual;
* @ex_rslt DD/MM/YYYY HH24:MI:SS
**/
FUNCTION GET_DATE_FORMAT(
pTemplateTableName IN VARCHAR2,
pColumnName IN VARCHAR2
) RETURN VARCHAR2;
/**
* @name GENERATE_EXTERNAL_TABLE_PARAMS
* @desc It builds two strings, pColumnList and pFieldList, for the Template Table specified by the pTemplateTableName parameter.
* @example
* declare
* vColumnList CLOB;
* vFieldList CLOB;
* begin
* FILE_MANAGER.GENERATE_EXTERNAL_TABLE_PARAMS (
* pTemplateTableName => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES_HEADER'
* ,pColumnList => vColumnList
* ,pFieldList => vFieldList
* );
* DBMS_OUTPUT.PUT_LINE('vColumnList = '||vColumnList);
* DBMS_OUTPUT.PUT_LINE('vFieldList = '||vFieldList);
* end;
* /
**/
PROCEDURE GENERATE_EXTERNAL_TABLE_PARAMS (
pTemplateTableName IN VARCHAR2,
pColumnList OUT CLOB,
pFieldList OUT CLOB
);
/**
* @name ADD_SOURCE
* @desc Insert a new record to A_SOURCE table.
* pSourceKey is a PRIMARY KEY value.
**/
PROCEDURE ADD_SOURCE (
pSourceKey IN CT_MRDS.A_SOURCE.A_SOURCE_KEY%TYPE,
pSourceName IN CT_MRDS.A_SOURCE.SOURCE_NAME%TYPE
);
/**
* @name DELETE_SOURCE_CASCADE
* @desc Safely deletes a SOURCE specified by pSourceKey parameter from A_SOURCE table and all dependent tables:
* - A_SOURCE_FILE_CONFIG
* - A_SOURCE_FILE_RECEIVED
* - A_COLUMN_DATE_FORMAT (only if template table is not shared with other source systems)
* The procedure checks if template tables are shared before deleting date format configurations.
* If a template table is used by multiple source systems, date formats are preserved.
* @example CALL CT_MRDS.FILE_MANAGER.DELETE_SOURCE_CASCADE(pSourceKey => 'TEST_SYS');
**/
PROCEDURE DELETE_SOURCE_CASCADE (
pSourceKey IN CT_MRDS.A_SOURCE.A_SOURCE_KEY%TYPE
);
/**
* @name GET_CONTAINER_SOURCE_FILE_CONFIG_KEY
* @desc For the specified pSourceFileId parameter (A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID),
* it returns A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY for the related CONTAINER record.
* @example select FILE_MANAGER.GET_CONTAINER_SOURCE_FILE_CONFIG_KEY(
* pSourceFileId => 'UC_DISSEM')
* from dual;
* @ex_rslt 126
**/
FUNCTION GET_CONTAINER_SOURCE_FILE_CONFIG_KEY (
pSourceFileId IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID%TYPE
) RETURN PLS_INTEGER;
/**
* @name GET_SOURCE_FILE_CONFIG_KEY
* @desc For specified input parameters,
* it returns A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY.
* @example select FILE_MANAGER.GET_SOURCE_FILE_CONFIG_KEY (
* pSourceFileType => 'INPUT'
* ,pSourceFileId => 'UC_DISSEM'
* ,pTableId => 'UC_NMA_DISSEM')
* from dual;
* @ex_rslt 126
**/
FUNCTION GET_SOURCE_FILE_CONFIG_KEY (
pSourceFileType IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE%TYPE DEFAULT 'INPUT'
,pSourceFileId IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID%TYPE
,pTableId IN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID%TYPE
) RETURN PLS_INTEGER;
/**
* @name ADD_SOURCE_FILE_CONFIG
* @desc Insert a new record to A_SOURCE_FILE_CONFIG table.
* MARS-1049: Added pEncoding parameter for CSV character set specification.
* @param pEncoding - Character set encoding for CSV files (e.g., 'UTF8', 'WE8MSWIN1252', 'EE8ISO8859P2')
* If NULL, no CHARACTERSET clause is added to external table definitions
* @example CALL CT_MRDS.FILE_MANAGER.ADD_SOURCE_FILE_CONFIG(
* pSourceKey => 'C2D', pSourceFileType => 'INPUT',
* pSourceFileId => 'UC_DISSEM', pTableId => 'METADATA_LOADS',
* pTemplateTableName => 'CT_ET_TEMPLATES.C2D_A_UC_DISSEM_METADATA_LOADS',
* pEncoding => 'UTF8'
* );
**/
PROCEDURE ADD_SOURCE_FILE_CONFIG (
pSourceKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_KEY%TYPE
,pSourceFileType IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE%TYPE
,pSourceFileId IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID%TYPE
,pSourceFileDesc IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_DESC%TYPE
,pSourceFileNamePattern IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_NAME_PATTERN%TYPE
,pTableId IN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID%TYPE DEFAULT NULL
,pTemplateTableName IN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME%TYPE DEFAULT NULL
,pContainerFileKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY%TYPE DEFAULT NULL
,pEncoding IN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING%TYPE DEFAULT NULL -- MARS-1049: new parameter
);
/**
* @name ADD_COLUMN_DATE_FORMAT
* @desc Inserts a new record into the A_COLUMN_DATE_FORMAT table.
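* @example (illustrative values, mirroring the GET_DATE_FORMAT example)
*          CALL CT_MRDS.FILE_MANAGER.ADD_COLUMN_DATE_FORMAT(
*              pTemplateTableName => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES_HEADER'
*              ,pColumnName => 'SNAPSHOT_DATE'
*              ,pDateFormat => 'DD/MM/YYYY HH24:MI:SS');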
**/
PROCEDURE ADD_COLUMN_DATE_FORMAT (
pTemplateTableName IN CT_MRDS.A_COLUMN_DATE_FORMAT.TEMPLATE_TABLE_NAME%TYPE
,pColumnName IN CT_MRDS.A_COLUMN_DATE_FORMAT.COLUMN_NAME%TYPE
,pDateFormat IN CT_MRDS.A_COLUMN_DATE_FORMAT.DATE_FORMAT%TYPE
);
/**
* @name GET_BUCKET_URI
* @desc Returns the HTTP URL string for the specified bucket area.
* Possible input values for pBucketArea are: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example select FILE_MANAGER.GET_BUCKET_URI(pBucketArea => 'ODS') from dual;
* @ex_rslt https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frcnomajoc7v/b/mrds_data_tst/o/
**/
FUNCTION GET_BUCKET_URI(pBucketArea VARCHAR2)
RETURN VARCHAR2;
/**
* @name GET_DET_SOURCE_FILE_CONFIG_INFO
* @desc Function returns details about A_SOURCE_FILE_CONFIG record
* for specified pSourceFileConfigKey (A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY).
* If pIncludeContainerInfo is <> 0 it returns additional info about related Container config record (A_SOURCE_FILE_CONFIG)
* If pIncludeColumnFormatInfo is <> 0 it returns additional info about related ColumnFormat config record (A_COLUMN_DATE_FORMAT)
* @example select FILE_MANAGER.GET_DET_SOURCE_FILE_CONFIG_INFO (
* pSourceFileConfigKey => 128
* ,pIncludeContainerInfo => 1
* ,pIncludeColumnFormatInfo => 1
* ) from dual;
* @ex_rslt
* Details about File Configuration:
* --------------------------------
* A_SOURCE_FILE_CONFIG_KEY = 128
* A_SOURCE_KEY = C2D
* ...
* --------------------------------
*
* Details about related Container Config:
* --------------------------------
* A_SOURCE_FILE_CONFIG_KEY = 126
* A_SOURCE_KEY = C2D
* ...
* --------------------------------
*
* Column Date Format config entries:
* --------------------------------
* TEMPLATE_TABLE_NAME = CT_ET_TEMPLATES.C2D_UC_MA_DISSEM
* ...
* --------------------------------
**/
FUNCTION GET_DET_SOURCE_FILE_CONFIG_INFO (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE
,pIncludeContainerInfo IN PLS_INTEGER DEFAULT 1
,pIncludeColumnFormatInfo IN PLS_INTEGER DEFAULT 1
) RETURN VARCHAR2;
/**
* @name GET_DET_SOURCE_FILE_RECEIVED_INFO
* @desc Function returns details about A_SOURCE_FILE_RECEIVED record
* for specified pSourceFileReceivedKey (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY).
* If pIncludeConfigInfo is <> 0 it returns additional info about the related Config record (A_SOURCE_FILE_CONFIG)
* If pIncludeContainerInfo is <> 0 it returns additional info about related Container config record (A_SOURCE_FILE_CONFIG)
* If pIncludeColumnFormatInfo is <> 0 it returns additional info about related ColumnFormat config record (A_COLUMN_DATE_FORMAT)
* @example select FILE_MANAGER.GET_DET_SOURCE_FILE_RECEIVED_INFO (
* pSourceFileReceivedKey => 377
* ,pIncludeConfigInfo => 1
* ,pIncludeContainerInfo => 1
* ,pIncludeColumnFormatInfo => 1
* ) from dual;
*
**/
FUNCTION GET_DET_SOURCE_FILE_RECEIVED_INFO (
pSourceFileReceivedKey IN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE
,pIncludeConfigInfo IN PLS_INTEGER DEFAULT 1
,pIncludeContainerInfo IN PLS_INTEGER DEFAULT 1
,pIncludeColumnFormatInfo IN PLS_INTEGER DEFAULT 1
) RETURN VARCHAR2;
/**
* @name GET_DET_USER_LOAD_OPERATIONS
* @desc Function returns details from USER_LOAD_OPERATIONS table
* for specified pOperationId.
* @example select FILE_MANAGER.GET_DET_USER_LOAD_OPERATIONS (pOperationId => 3608) from dual;
* @ex_rslt
* Details about USER_LOAD_OPERATIONS where ID = 3608
* --------------------------------
* ID = 3608
* TYPE = VALIDATE
* SID = 31260
* SERIAL# = 52915
* START_TIME = 2025-05-20 10.08.24.436983 EUROPE/BELGRADE
* UPDATE_TIME = 2025-05-20 10.08.24.458643 EUROPE/BELGRADE
* STATUS = FAILED
* OWNER_NAME = CT_MRDS
* TABLE_NAME = STANDING_FACILITIES_HEADER
* PARTITION_NAME =
* SUBPARTITION_NAME =
* FILE_URI_LIST =
* ROWS_LOADED =
* LOGFILE_TABLE = VALIDATE$3608_LOG
* BADFILE_TABLE = VALIDATE$3608_BAD
* STATUS_TABLE =
* TEMPEXT_TABLE =
* CREDENTIAL_NAME =
* EXPIRATION_TIME = 2025-05-22 10.08.24.436983000 EUROPE/BELGRADE
* --------------------------------
**/
FUNCTION GET_DET_USER_LOAD_OPERATIONS (
pOperationId PLS_INTEGER
) RETURN VARCHAR2;
/**
* @name ANALYZE_VALIDATION_ERRORS
* @desc Wrapper function that analyzes validation errors for a source file using its received key.
* Automatically derives template schema, table name, CSV URI and validation log table
* from file metadata and calls ENV_MANAGER.ANALYZE_VALIDATION_ERRORS.
* @example SELECT FILE_MANAGER.ANALYZE_VALIDATION_ERRORS(63) FROM DUAL;
* @ex_rslt Detailed validation analysis report with column mismatches and solutions
**/
FUNCTION ANALYZE_VALIDATION_ERRORS(
pSourceFileReceivedKey IN NUMBER
) RETURN VARCHAR2;
---------------------------------------------------------------------------------------------------------------------------
-- PACKAGE VERSION MANAGEMENT FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* @name GET_VERSION
* @desc Returns the current version number of the FILE_MANAGER package.
* Uses semantic versioning format (MAJOR.MINOR.PATCH).
* @example SELECT FILE_MANAGER.GET_VERSION() FROM DUAL;
* @ex_rslt 3.2.0
**/
FUNCTION GET_VERSION RETURN VARCHAR2;
/**
* @name GET_BUILD_INFO
* @desc Returns comprehensive build information including version, build date, and author.
* Uses centralized ENV_MANAGER.GET_PACKAGE_VERSION_INFO function.
* @example SELECT FILE_MANAGER.GET_BUILD_INFO() FROM DUAL;
* @ex_rslt Package: FILE_MANAGER
* Version: 3.2.0
* Build Date: 2025-10-22 16:30:00
* Author: Grzegorz Michalski
**/
FUNCTION GET_BUILD_INFO RETURN VARCHAR2;
/**
* @name GET_VERSION_HISTORY
* @desc Returns complete version history with all releases and changes.
* Uses centralized ENV_MANAGER.FORMAT_VERSION_HISTORY function.
* @example SELECT FILE_MANAGER.GET_VERSION_HISTORY() FROM DUAL;
* @ex_rslt FILE_MANAGER Version History:
* 3.2.0 (2025-10-22): Added package versioning system...
**/
FUNCTION GET_VERSION_HISTORY RETURN VARCHAR2;
END;
/


@@ -35,9 +35,12 @@ PROMPT Rollback steps:
PROMPT 1. Rollback TRASH retention statuses
PROMPT 2. Revoke T_FILENAME privileges
PROMPT 3. Remove validation trigger
PROMPT 4. Drop all configuration columns (ARCHIVAL_STRATEGY, MINIMUM_AGE_MONTHS, ARCHIVE_ENABLED, KEEP_IN_TRASH)
PROMPT 5. Restore FILE_ARCHIVER package to v2.0.0
PROMPT 6. Revert all archival strategies to THRESHOLD_BASED
PROMPT 4. Remove column comments (OPTIONAL - does not affect functionality)
PROMPT 5. Revert threshold column renames (restore original naming)
PROMPT 6. Drop all configuration columns (ARCHIVAL_STRATEGY, MINIMUM_AGE_MONTHS, IS_ARCHIVE_ENABLED, IS_KEEP_IN_TRASH)
PROMPT 7. Restore FILE_ARCHIVER package to v2.0.0
PROMPT 8. Restore FILE_MANAGER package to v3.3.1
PROMPT 9. Revert all archival strategies to THRESHOLD_BASED
PROMPT
PROMPT Timestamp:
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS rollback_start FROM DUAL;
@@ -56,38 +59,61 @@ WHENEVER SQLERROR CONTINUE
-- Rollback steps (in reverse order)
PROMPT
PROMPT Step 1/7: Rolling back TRASH retention statuses
PROMPT Step 1/11: Rolling back TRASH retention statuses
PROMPT ================================================
@@90_MARS_828_rollback_trash_retention_statuses.sql
PROMPT
PROMPT Step 2/7: Revoking T_FILENAME privileges from MRDS_LOADER
PROMPT Step 2/11: Revoking T_FILENAME privileges from MRDS_LOADER
PROMPT ==========================================================
@@95_MARS_828_rollback_grant_t_filename.sql
PROMPT
PROMPT Step 3/7: Dropping validation trigger
PROMPT Step 3/11: Dropping validation trigger
PROMPT ======================================
@@93_MARS_828_rollback_trigger.sql
PROMPT
PROMPT Step 4/7: Dropping all archival configuration columns
PROMPT Step 4/11 (OPTIONAL): Removing column comments
PROMPT ==============================================
PROMPT NOTE: This is optional - comments do not affect functionality
PROMPT Skipping column comments removal in standard rollback
PROMPT Execute 94b_MARS_828_rollback_column_comments.sql manually if needed
PROMPT
PROMPT
PROMPT Step 5/11: Reverting threshold column renames
PROMPT =============================================
@@94a_MARS_828_rollback_threshold_rename.sql
PROMPT
PROMPT Step 6/11: Dropping all archival configuration columns
PROMPT ======================================================
@@94_MARS_828_rollback_columns.sql
PROMPT
PROMPT Step 5/7: Restoring FILE_ARCHIVER Package Specification v2.0.0
PROMPT Step 7/11: Restoring FILE_ARCHIVER Package Specification v2.0.0
PROMPT ===============================================================
@@91_MARS_828_rollback_FILE_ARCHIVER_SPEC.sql
PROMPT
PROMPT Step 6/7: Restoring FILE_ARCHIVER Package Body v2.0.0
PROMPT ======================================================
PROMPT Step 8/11: Restoring FILE_ARCHIVER Package Body v2.0.0
PROMPT =======================================================
@@92_MARS_828_rollback_FILE_ARCHIVER_BODY.sql
PROMPT
PROMPT Step 7/7: Verifying tracked packages
PROMPT =====================================
PROMPT Step 9/11: Restoring FILE_MANAGER Package Specification v3.3.1
PROMPT ===============================================================
@@97_MARS_828_rollback_FILE_MANAGER_SPEC.sql
PROMPT
PROMPT Step 10/11: Restoring FILE_MANAGER Package Body v3.3.1
PROMPT ======================================================
@@98_MARS_828_rollback_FILE_MANAGER_BODY.sql
PROMPT
PROMPT Step 11/11: Verifying tracked packages
PROMPT ======================================
@@verify_packages_version.sql
-- Verify rollback
@@ -97,9 +123,9 @@ PROMPT =========================================
SELECT object_name, object_type, status, last_ddl_time
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name = 'FILE_ARCHIVER'
AND object_name IN ('FILE_ARCHIVER', 'FILE_MANAGER')
AND object_type IN ('PACKAGE', 'PACKAGE BODY')
ORDER BY object_type;
ORDER BY object_name, object_type;
PROMPT
PROMPT ============================================================================
@@ -109,8 +135,9 @@ PROMPT Completion Time:
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS rollback_end FROM DUAL;
PROMPT
PROMPT Rollback Summary:
PROMPT - Package: CT_MRDS.FILE_ARCHIVER
PROMPT - Restored Version: 2.0.0 (THRESHOLD_BASED archival only)
PROMPT - Packages Rolled Back:
PROMPT * CT_MRDS.FILE_ARCHIVER to v2.0.0 (THRESHOLD_BASED archival only)
PROMPT * CT_MRDS.FILE_MANAGER to v3.3.1 (pre-MARS-828 threshold column compatibility)
PROMPT - Removed Features: CURRENT_MONTH_ONLY, MINIMUM_AGE_MONTHS, HYBRID strategies
PROMPT
PROMPT Log file: &_filename

File diff suppressed because it is too large


@@ -0,0 +1,637 @@
create or replace PACKAGE CT_MRDS.FILE_MANAGER
AUTHID CURRENT_USER
AS
/**
* General package comment: please document functions and procedures as shown in the example below.
* This is the standard format.
* The comment structure is parsed by the GET_PACKAGE_DOCUMENTATION function,
* which returns documentation text ready to copy-paste into the Confluence page.
**/
-- Example comment:
/**
* @name EX_PROCEDURE_NAME
* @desc Procedure description
* @example select FILE_MANAGER.EX_PROCEDURE_NAME(pParameter => 129) from dual;
* @ex_rslt Example Result
**/
-- Package Version Information (Semantic Versioning: MAJOR.MINOR.PATCH)
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.3.1';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2025-11-27 14:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(100) := 'Grzegorz Michalski';
-- Version History (Latest changes first)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'3.3.1 (2025-11-27): MARS-1046 - Fixed ISO 8601 datetime format parsing with milliseconds and timezone (e.g., 2012-03-02T14:16:23.798+01:00)' || CHR(13)||CHR(10) ||
'3.3.0 (2025-11-26): MARS-1056 - Fixed VARCHAR2 definitions in GENERATE_EXTERNAL_TABLE_PARAMS to preserve CHAR/BYTE semantics from template tables' || CHR(13)||CHR(10) ||
'3.2.1 (2025-11-24): MARS-1049 - Added pEncoding parameter support for CSV character set specification' || CHR(13)||CHR(10) ||
'3.2.0 (2025-10-22): Added package versioning system using centralized ENV_MANAGER functions' || CHR(13)||CHR(10) ||
'3.1.0 (2025-10-20): Enhanced PROCESS_SOURCE_FILE with 6-step validation workflow' || CHR(13)||CHR(10) ||
'3.0.0 (2025-10-15): Separated export procedures into dedicated DATA_EXPORTER package' || CHR(13)||CHR(10) ||
'2.5.0 (2025-10-10): Added DELETE_SOURCE_CASCADE for safe configuration removal' || CHR(13)||CHR(10) ||
'2.0.0 (2025-09-25): Added official path patterns support (INBOX 3-level, ODS 2-level, ARCHIVE 2-level)' || CHR(13)||CHR(10) ||
'1.0.0 (2025-09-01): Initial release with file processing and validation capabilities';
TYPE tSourceFileReceived IS RECORD
(
A_SOURCE_FILE_RECEIVED_KEY CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE,
A_SOURCE_FILE_CONFIG_KEY CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_CONFIG_KEY%TYPE,
SOURCE_FILE_PREFIX_INBOX VARCHAR2(430),
SOURCE_FILE_PREFIX_ODS VARCHAR2(430),
SOURCE_FILE_PREFIX_QUARANTINE VARCHAR2(430),
SOURCE_FILE_PREFIX_ARCHIVE VARCHAR2(430),
SOURCE_FILE_NAME CT_MRDS.A_SOURCE_FILE_RECEIVED.SOURCE_FILE_NAME%TYPE,
RECEPTION_DATE CT_MRDS.A_SOURCE_FILE_RECEIVED.RECEPTION_DATE%TYPE,
PROCESSING_STATUS CT_MRDS.A_SOURCE_FILE_RECEIVED.PROCESSING_STATUS%TYPE,
EXTERNAL_TABLE_NAME CT_MRDS.A_SOURCE_FILE_RECEIVED.EXTERNAL_TABLE_NAME%TYPE
);
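-- Usage sketch (illustrative): client code can declare a variable of this record type, e.g.
--   vFileRec CT_MRDS.FILE_MANAGER.tSourceFileReceived;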
cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);
vgSourceFileConfigKey PLS_INTEGER;
vgMsgTmp VARCHAR2(32000);
---------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------
/**
* @name GET_SOURCE_FILE_CONFIG
* @desc Get source file type by matching the source file name against source file type naming patterns
* or by specifying the id of a received source file.
* @example ...
* @ex_rslt "CT_MRDS.A_SOURCE_FILE_CONFIG%ROWTYPE"
**/
FUNCTION GET_SOURCE_FILE_CONFIG(pFileUri IN VARCHAR2 DEFAULT NULL
, pSourceFileReceivedKey IN NUMBER DEFAULT NULL
, pSourceFileConfigKey IN NUMBER DEFAULT NULL)
RETURN CT_MRDS.A_SOURCE_FILE_CONFIG%ROWTYPE;
/**
* @name REGISTER_SOURCE_FILE_RECEIVED
* @desc Register a newly received source file in A_SOURCE_FILE_RECEIVED table.
* This overload automatically determines source file type from the file name.
* It returns the value of the A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY column for the newly added record.
* @example vSourceFileReceivedKey := FILE_MANAGER.REGISTER_SOURCE_FILE_RECEIVED(pSourceFileReceivedName => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv');
* @ex_rslt 3245
**/
FUNCTION REGISTER_SOURCE_FILE_RECEIVED (
pSourceFileReceivedName IN VARCHAR2
)
RETURN PLS_INTEGER;
/**
* @name REGISTER_SOURCE_FILE_RECEIVED
* @desc Registers a new source file in the A_SOURCE_FILE_RECEIVED table based on pSourceFileReceivedName and pSourceFileConfig.
* It returns the value of the A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY column for the newly added record.
* @example vSourceFileReceivedKey := FILE_MANAGER.REGISTER_SOURCE_FILE_RECEIVED(
* pSourceFileReceivedName => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv'
* ,pSourceFileConfig => ...A_SOURCE_FILE_CONFIG%ROWTYPE... );
* @ex_rslt 3245
**/
FUNCTION REGISTER_SOURCE_FILE_RECEIVED (
pSourceFileReceivedName IN VARCHAR2,
pSourceFileConfig IN CT_MRDS.A_SOURCE_FILE_CONFIG%ROWTYPE
)
RETURN PLS_INTEGER;
/**
* @name SET_SOURCE_FILE_RECEIVED_STATUS
* @desc Sets the status of a file in the A_SOURCE_FILE_RECEIVED table (PROCESSING_STATUS column),
* based on A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY
* and the provided value of the pStatus parameter
* @example exec FILE_MANAGER.SET_SOURCE_FILE_RECEIVED_STATUS(pSourceFileReceivedKey => 377, pStatus => 'READY_FOR_INGESTION');
**/
PROCEDURE SET_SOURCE_FILE_RECEIVED_STATUS(
pSourceFileReceivedKey IN PLS_INTEGER,
pStatus IN VARCHAR2
);
/**
* @name GET_EXTERNAL_TABLE_COLUMNS
* @desc Returns a string with all table column definitions based on the pTargetTableTemplate "TEMPLATE TABLE" name.
* It is used for creating an "EXTERNAL TABLE" via the CREATE_EXTERNAL_TABLE procedure.
* @example select FILE_MANAGER.GET_EXTERNAL_TABLE_COLUMNS(pTargetTableTemplate => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES_HEADER') from dual;
* @ex_rslt "A_KEY" NUMBER(38,0) NOT NULL ENABLE,
* "A_WORKFLOW_HISTORY_KEY" NUMBER(38,0) NOT NULL ENABLE,
* "REV_NUMBER" NUMBER(28,0),
* "REF_DATE" DATE,
* "FREE_TEXT" VARCHAR2(1000 CHAR),
* "MLF_BS_TOTAL" NUMBER(28,10),
* "DF_BS_TOTAL" NUMBER(28,10),
* "MLF_SF_TOTAL" NUMBER(28,10),
* "DF_SF_TOTAL" NUMBER(28,10)
**/
FUNCTION GET_EXTERNAL_TABLE_COLUMNS (
pTargetTableTemplate IN VARCHAR2
)
RETURN CLOB;
/**
* @name CREATE_EXTERNAL_TABLE
* @desc A wrapper procedure for DBMS_CLOUD.CREATE_EXTERNAL_TABLE which creates an External Table.
* MARS-1049: Added pEncoding parameter for CSV character set specification
* @param pEncoding - Character set encoding for CSV files (e.g., 'UTF8', 'WE8MSWIN1252')
* If provided, adds CHARACTERSET clause to external table definition
* @example
* begin
* FILE_MANAGER.CREATE_EXTERNAL_TABLE(
* pTableName => 'STANDING_FACILITIES_HEADER',
* pTemplateTableName => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES_HEADER',
* pPrefix => 'ODS/LM/STANDING_FACILITIES_HEADER/',
* pBucketUri => 'https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frcnomajoc7v/b/mrds_data_tst/o/',
* pFileName => NULL,
* pDelimiter => ',',
* pEncoding => 'UTF8'
* );
* end;
**/
PROCEDURE CREATE_EXTERNAL_TABLE (
pTableName IN VARCHAR2,
pTemplateTableName IN VARCHAR2,
pPrefix IN VARCHAR2,
pBucketUri IN VARCHAR2 DEFAULT ENV_MANAGER.gvInboxBucketUri,
pFileName IN VARCHAR2 DEFAULT NULL,
pDelimiter IN VARCHAR2 DEFAULT ',',
pEncoding IN VARCHAR2 DEFAULT NULL -- MARS-1049: new parameter
);
/**
* @name CREATE_EXTERNAL_TABLE
* @desc Creates an External Table for the single file identified by the
* pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* @example exec FILE_MANAGER.CREATE_EXTERNAL_TABLE(pSourceFileReceivedKey => 377);
**/
PROCEDURE CREATE_EXTERNAL_TABLE (
pSourceFileReceivedKey IN NUMBER
);
/**
* @name VALIDATE_SOURCE_FILE_RECEIVED
* @desc A wrapper procedure for DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE.
* It validates an External Table built upon the single file identified by
* the pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* @example exec FILE_MANAGER.VALIDATE_SOURCE_FILE_RECEIVED(pSourceFileReceivedKey => 377);
**/
PROCEDURE VALIDATE_SOURCE_FILE_RECEIVED
(
pSourceFileReceivedKey IN NUMBER
);
/**
* @name VALIDATE_EXTERNAL_TABLE
* @desc A wrapper function for DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE.
* It validates the External Table provided by the pTableName parameter.
* It returns: PASSED or FAILED.
* @example
* declare
* vStatus VARCHAR2(100);
* begin
* vStatus := FILE_MANAGER.VALIDATE_EXTERNAL_TABLE(pTableName => 'STANDING_FACILITIES_HEADER');
* DBMS_OUTPUT.PUT_LINE('vStatus = '||vStatus);
* end;
*
* @ex_rslt FAILED
**/
FUNCTION VALIDATE_EXTERNAL_TABLE(pTableName IN VARCHAR2)
RETURN VARCHAR2;
/**
* @name S_VALIDATE_EXTERNAL_TABLE
* @desc A function which checks whether a SELECT query returns any rows.
* It tries to select from the External Table provided by the pTableName parameter.
* It returns: PASSED or FAILED.
* @example
* declare
* vStatus VARCHAR2(100);
* begin
* vStatus := FILE_MANAGER.S_VALIDATE_EXTERNAL_TABLE(pTableName => 'STANDING_FACILITIES_HEADER');
* DBMS_OUTPUT.PUT_LINE('vStatus = '||vStatus);
* end;
*
* @ex_rslt PASSED
**/
FUNCTION S_VALIDATE_EXTERNAL_TABLE(pTableName IN VARCHAR2)
RETURN VARCHAR2;
/**
* @name DROP_EXTERNAL_TABLE
* @desc Drops the External Table for the single file identified by the
* pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* @example exec FILE_MANAGER.DROP_EXTERNAL_TABLE(pSourceFileReceivedKey => 377);
**/
PROCEDURE DROP_EXTERNAL_TABLE (
pSourceFileReceivedKey IN NUMBER
);
/**
* @name COPY_FILE
* @desc Copies the file identified by the
* pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* to the destination specified by the pDestination parameter.
* The allowed value for the pDestination parameter is: 'ODS'
* @example exec FILE_MANAGER.COPY_FILE(pSourceFileReceivedKey => 377, pDestination => 'ODS');
**/
PROCEDURE COPY_FILE(
pSourceFileReceivedKey IN NUMBER,
pDestination IN VARCHAR2
);
/**
* @name MOVE_FILE
* @desc Moves the file identified by the
* pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* to the destination specified by the pDestination parameter.
* The allowed values for the pDestination parameter are: 'ODS', 'QUARANTINE'
* @example exec FILE_MANAGER.MOVE_FILE(pSourceFileReceivedKey => 377, pDestination => 'ODS');
**/
PROCEDURE MOVE_FILE(
pSourceFileReceivedKey IN NUMBER,
pDestination IN VARCHAR2
);
/**
* @name DELETE_FOLDER_CONTENTS
* @desc Deletes all files from the specified folder in cloud storage.
* The procedure lists all objects in the specified folder prefix and deletes them one by one.
* pBucketArea parameter specifies which bucket to use: 'INBOX', 'DATA', 'ARCHIVE'
* pFolderPrefix parameter specifies the folder path within the bucket (e.g., 'C2D/UC_DISSEM/UC_NMA_DISSEM/')
* @example exec FILE_MANAGER.DELETE_FOLDER_CONTENTS(pBucketArea => 'INBOX', pFolderPrefix => 'C2D/UC_DISSEM/UC_NMA_DISSEM/');
**/
PROCEDURE DELETE_FOLDER_CONTENTS(
pBucketArea IN VARCHAR2,
pFolderPrefix IN VARCHAR2
);
/**
* @name PROCESS_SOURCE_FILE
* @desc Processes the file provided by the pSourceFileReceivedName parameter.
* Umbrella procedure that calls:
* - REGISTER_SOURCE_FILE_RECEIVED;
* - CREATE_EXTERNAL_TABLE;
* - VALIDATE_SOURCE_FILE_RECEIVED;
* - DROP_EXTERNAL_TABLE;
* - MOVE_FILE;
* @example exec FILE_MANAGER.PROCESS_SOURCE_FILE(pSourceFileReceivedName => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv');
**/
PROCEDURE PROCESS_SOURCE_FILE(pSourceFileReceivedName IN VARCHAR2)
;
/**
* @name PROCESS_SOURCE_FILE
* @desc Processes the file provided by the pSourceFileReceivedName parameter and returns the processing result.
* It returns 0 on success or a negative value -(value) on failure.
* Umbrella function that calls the PROCESS_SOURCE_FILE procedure.
* @example
* declare
* vResult PLS_INTEGER;
* begin
* vResult := CT_MRDS.FILE_MANAGER.PROCESS_SOURCE_FILE(PSOURCEFILERECEIVEDNAME => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv');
* DBMS_OUTPUT.PUT_LINE('vResult = ' || vResult);
* end;
* @ex_rslt 0
* -20021
**/
FUNCTION PROCESS_SOURCE_FILE(pSourceFileReceivedName IN VARCHAR2)
RETURN PLS_INTEGER;
/**
* @name GET_DATE_FORMAT
* @desc Returns the date format for the specified template table name and column name.
* The format is taken from the A_COLUMN_DATE_FORMAT configuration table.
* @example select FILE_MANAGER.GET_DATE_FORMAT(
* pTemplateTableName => 'STANDING_FACILITIES_HEADER',
* pColumnName => 'SNAPSHOT_DATE')
* from dual;
* @ex_rslt DD/MM/YYYY HH24:MI:SS
**/
FUNCTION GET_DATE_FORMAT(
pTemplateTableName IN VARCHAR2,
pColumnName IN VARCHAR2
) RETURN VARCHAR2;
/**
* @name GENERATE_EXTERNAL_TABLE_PARAMS
* @desc Builds two strings, pColumnList and pFieldList, for the Template Table specified by the pTemplateTableName parameter.
* @example
* declare
* vColumnList CLOB;
* vFieldList CLOB;
* begin
* FILE_MANAGER.GENERATE_EXTERNAL_TABLE_PARAMS (
* pTemplateTableName => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES_HEADER'
* ,pColumnList => vColumnList
* ,pFieldList => vFieldList
* );
* DBMS_OUTPUT.PUT_LINE('vColumnList = '||vColumnList);
* DBMS_OUTPUT.PUT_LINE('vFieldList = '||vFieldList);
* end;
* /
**/
PROCEDURE GENERATE_EXTERNAL_TABLE_PARAMS (
pTemplateTableName IN VARCHAR2,
pColumnList OUT CLOB,
pFieldList OUT CLOB
);
/**
* @name ADD_SOURCE
* @desc Inserts a new record into the A_SOURCE table.
* pSourceKey is a PRIMARY KEY value.
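* @example (illustrative values) CALL CT_MRDS.FILE_MANAGER.ADD_SOURCE(
*              pSourceKey => 'TEST_SYS', pSourceName => 'Test source system');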
**/
PROCEDURE ADD_SOURCE (
pSourceKey IN CT_MRDS.A_SOURCE.A_SOURCE_KEY%TYPE,
pSourceName IN CT_MRDS.A_SOURCE.SOURCE_NAME%TYPE
);
/**
* @name DELETE_SOURCE_CASCADE
* @desc Safely deletes a SOURCE specified by pSourceKey parameter from A_SOURCE table and all dependent tables:
* - A_SOURCE_FILE_CONFIG
* - A_SOURCE_FILE_RECEIVED
* - A_COLUMN_DATE_FORMAT (only if template table is not shared with other source systems)
* The procedure checks if template tables are shared before deleting date format configurations.
* If a template table is used by multiple source systems, date formats are preserved.
* @example CALL CT_MRDS.FILE_MANAGER.DELETE_SOURCE_CASCADE(pSourceKey => 'TEST_SYS');
**/
PROCEDURE DELETE_SOURCE_CASCADE (
pSourceKey IN CT_MRDS.A_SOURCE.A_SOURCE_KEY%TYPE
);
/**
* @name GET_CONTAINER_SOURCE_FILE_CONFIG_KEY
* @desc For specified parameter pSourceFileId (A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID)
* it returns A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY for related CONTAINER record.
* @example select FILE_MANAGER.GET_CONTAINER_SOURCE_FILE_CONFIG_KEY(
* pSourceFileId => 'UC_DISSEM')
* from dual;
* @ex_rslt 126
**/
FUNCTION GET_CONTAINER_SOURCE_FILE_CONFIG_KEY (
pSourceFileId IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID%TYPE
) RETURN PLS_INTEGER;
/**
* @name GET_SOURCE_FILE_CONFIG_KEY
* @desc For specified input parameters,
* it returns A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY.
* @example select FILE_MANAGER.GET_SOURCE_FILE_CONFIG_KEY (
* pSourceFileType => 'INPUT'
* ,pSourceFileId => 'UC_DISSEM'
* ,pTableId => 'UC_NMA_DISSEM')
* from dual;
* @ex_rslt 126
**/
FUNCTION GET_SOURCE_FILE_CONFIG_KEY (
pSourceFileType IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE%TYPE DEFAULT 'INPUT'
,pSourceFileId IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID%TYPE
,pTableId IN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID%TYPE
) RETURN PLS_INTEGER;
/**
* @name ADD_SOURCE_FILE_CONFIG
* @desc Inserts a new record into the A_SOURCE_FILE_CONFIG table.
* MARS-1049: Added pEncoding parameter for CSV character set specification.
* @param pEncoding - Character set encoding for CSV files (e.g., 'UTF8', 'WE8MSWIN1252', 'EE8ISO8859P2')
* If NULL, no CHARACTERSET clause is added to external table definitions
* @example CALL CT_MRDS.FILE_MANAGER.ADD_SOURCE_FILE_CONFIG(
* pSourceKey => 'C2D', pSourceFileType => 'INPUT',
* pSourceFileId => 'UC_DISSEM', pTableId => 'METADATA_LOADS',
* pTemplateTableName => 'CT_ET_TEMPLATES.C2D_A_UC_DISSEM_METADATA_LOADS',
* pEncoding => 'UTF8'
* );
**/
PROCEDURE ADD_SOURCE_FILE_CONFIG (
pSourceKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_KEY%TYPE
,pSourceFileType IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE%TYPE
,pSourceFileId IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID%TYPE
,pSourceFileDesc IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_DESC%TYPE
,pSourceFileNamePattern IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_NAME_PATTERN%TYPE
,pTableId IN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID%TYPE DEFAULT NULL
,pTemplateTableName IN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME%TYPE DEFAULT NULL
,pContainerFileKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY%TYPE DEFAULT NULL
,pEncoding IN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING%TYPE DEFAULT NULL -- MARS-1049: new parameter
);
/**
* @name ADD_COLUMN_DATE_FORMAT
* @desc Inserts a new record into the A_COLUMN_DATE_FORMAT table.
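* @example (illustrative values, mirroring the GET_DATE_FORMAT example)
*          CALL CT_MRDS.FILE_MANAGER.ADD_COLUMN_DATE_FORMAT(
*              pTemplateTableName => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES_HEADER'
*              ,pColumnName => 'SNAPSHOT_DATE'
*              ,pDateFormat => 'DD/MM/YYYY HH24:MI:SS');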
**/
PROCEDURE ADD_COLUMN_DATE_FORMAT (
pTemplateTableName IN CT_MRDS.A_COLUMN_DATE_FORMAT.TEMPLATE_TABLE_NAME%TYPE
,pColumnName IN CT_MRDS.A_COLUMN_DATE_FORMAT.COLUMN_NAME%TYPE
,pDateFormat IN CT_MRDS.A_COLUMN_DATE_FORMAT.DATE_FORMAT%TYPE
);
/**
* @name GET_BUCKET_URI
* @desc Returns the HTTP URL string for the specified bucket area.
* Possible input values for pBucketArea are: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example select FILE_MANAGER.GET_BUCKET_URI(pBucketArea => 'ODS') from dual;
* @ex_rslt https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frcnomajoc7v/b/mrds_data_tst/o/
**/
FUNCTION GET_BUCKET_URI(pBucketArea VARCHAR2)
RETURN VARCHAR2;
/**
* @name GET_DET_SOURCE_FILE_CONFIG_INFO
* @desc Function returns details about A_SOURCE_FILE_CONFIG record
* for specified pSourceFileConfigKey (A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY).
* If pIncludeContainerInfo is <> 0 it returns additional info about related Container config record (A_SOURCE_FILE_CONFIG)
* If pIncludeColumnFormatInfo is <> 0 it returns additional info about related ColumnFormat config record (A_COLUMN_DATE_FORMAT)
* @example select FILE_MANAGER.GET_DET_SOURCE_FILE_CONFIG_INFO (
* pSourceFileConfigKey => 128
* ,pIncludeContainerInfo => 1
* ,pIncludeColumnFormatInfo => 1
* ) from dual;
* @ex_rslt
* Details about File Configuration:
* --------------------------------
* A_SOURCE_FILE_CONFIG_KEY = 128
* A_SOURCE_KEY = C2D
* ...
* --------------------------------
*
* Details about related Container Config:
* --------------------------------
* A_SOURCE_FILE_CONFIG_KEY = 126
* A_SOURCE_KEY = C2D
* ...
* --------------------------------
*
* Column Date Format config entries:
* --------------------------------
* TEMPLATE_TABLE_NAME = CT_ET_TEMPLATES.C2D_UC_MA_DISSEM
* ...
* --------------------------------
**/
FUNCTION GET_DET_SOURCE_FILE_CONFIG_INFO (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE
,pIncludeContainerInfo IN PLS_INTEGER DEFAULT 1
,pIncludeColumnFormatInfo IN PLS_INTEGER DEFAULT 1
) RETURN VARCHAR2;
/**
* @name GET_DET_SOURCE_FILE_RECEIVED_INFO
* @desc Function returns details about A_SOURCE_FILE_RECEIVED record
* for specified pSourceFileReceivedKey (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY).
* If pIncludeConfigInfo is <> 0 it returns additional info about the related Config record (A_SOURCE_FILE_CONFIG)
* If pIncludeContainerInfo is <> 0 it returns additional info about related Container config record (A_SOURCE_FILE_CONFIG)
* If pIncludeColumnFormatInfo is <> 0 it returns additional info about related ColumnFormat config record (A_COLUMN_DATE_FORMAT)
* @example select FILE_MANAGER.GET_DET_SOURCE_FILE_RECEIVED_INFO (
* pSourceFileReceivedKey => 377
* ,pIncludeConfigInfo => 1
* ,pIncludeContainerInfo => 1
* ,pIncludeColumnFormatInfo => 1
* ) from dual;
*
**/
FUNCTION GET_DET_SOURCE_FILE_RECEIVED_INFO (
pSourceFileReceivedKey IN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE
,pIncludeConfigInfo IN PLS_INTEGER DEFAULT 1
,pIncludeContainerInfo IN PLS_INTEGER DEFAULT 1
,pIncludeColumnFormatInfo IN PLS_INTEGER DEFAULT 1
) RETURN VARCHAR2;
/**
* @name GET_DET_USER_LOAD_OPERATIONS
* @desc Function returns details from USER_LOAD_OPERATIONS table
* for specified pOperationId.
* @example select FILE_MANAGER.GET_DET_USER_LOAD_OPERATIONS (pOperationId => 3608) from dual;
* @ex_rslt
* Details about USER_LOAD_OPERATIONS where ID = 3608
* --------------------------------
* ID = 3608
* TYPE = VALIDATE
* SID = 31260
* SERIAL# = 52915
* START_TIME = 2025-05-20 10.08.24.436983 EUROPE/BELGRADE
* UPDATE_TIME = 2025-05-20 10.08.24.458643 EUROPE/BELGRADE
* STATUS = FAILED
* OWNER_NAME = CT_MRDS
* TABLE_NAME = STANDING_FACILITIES_HEADER
* PARTITION_NAME =
* SUBPARTITION_NAME =
* FILE_URI_LIST =
* ROWS_LOADED =
* LOGFILE_TABLE = VALIDATE$3608_LOG
* BADFILE_TABLE = VALIDATE$3608_BAD
* STATUS_TABLE =
* TEMPEXT_TABLE =
* CREDENTIAL_NAME =
* EXPIRATION_TIME = 2025-05-22 10.08.24.436983000 EUROPE/BELGRADE
* --------------------------------
**/
FUNCTION GET_DET_USER_LOAD_OPERATIONS (
pOperationId PLS_INTEGER
) RETURN VARCHAR2;
/**
* @name ANALYZE_VALIDATION_ERRORS
* @desc Wrapper function that analyzes validation errors for a source file using its received key.
* Automatically derives template schema, table name, CSV URI and validation log table
* from file metadata and calls ENV_MANAGER.ANALYZE_VALIDATION_ERRORS.
* @example SELECT FILE_MANAGER.ANALYZE_VALIDATION_ERRORS(63) FROM DUAL;
* @ex_rslt Detailed validation analysis report with column mismatches and solutions
**/
FUNCTION ANALYZE_VALIDATION_ERRORS(
pSourceFileReceivedKey IN NUMBER
) RETURN VARCHAR2;
---------------------------------------------------------------------------------------------------------------------------
-- PACKAGE VERSION MANAGEMENT FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* @name GET_VERSION
* @desc Returns the current version number of the FILE_MANAGER package.
* Uses semantic versioning format (MAJOR.MINOR.PATCH).
* @example SELECT FILE_MANAGER.GET_VERSION() FROM DUAL;
* @ex_rslt 3.2.0
**/
FUNCTION GET_VERSION RETURN VARCHAR2;
/**
* @name GET_BUILD_INFO
* @desc Returns comprehensive build information including version, build date, and author.
* Uses centralized ENV_MANAGER.GET_PACKAGE_VERSION_INFO function.
* @example SELECT FILE_MANAGER.GET_BUILD_INFO() FROM DUAL;
* @ex_rslt Package: FILE_MANAGER
* Version: 3.2.0
* Build Date: 2025-10-22 16:30:00
* Author: Grzegorz Michalski
**/
FUNCTION GET_BUILD_INFO RETURN VARCHAR2;
/**
* @name GET_VERSION_HISTORY
* @desc Returns complete version history with all releases and changes.
* Uses centralized ENV_MANAGER.FORMAT_VERSION_HISTORY function.
* @example SELECT FILE_MANAGER.GET_VERSION_HISTORY() FROM DUAL;
* @ex_rslt FILE_MANAGER Version History:
* 3.2.0 (2025-10-22): Added package versioning system...
**/
FUNCTION GET_VERSION_HISTORY RETURN VARCHAR2;
END;
/

View File

@@ -29,7 +29,8 @@ DECLARE
-- Format: 'SCHEMA.PACKAGE_NAME'
-- ===================================================================
vPackageList t_string_array := t_string_array(
'CT_MRDS.FILE_ARCHIVER'
'CT_MRDS.FILE_ARCHIVER',
'CT_MRDS.FILE_MANAGER'
);
-- ===================================================================

View File

@@ -26,7 +26,7 @@ END;
/
CREATE TABLE CT_MRDS.A_PARALLEL_EXPORT_CHUNKS (
CHUNK_ID NUMBER PRIMARY KEY,
CHUNK_ID NUMBER NOT NULL,
TASK_NAME VARCHAR2(100) NOT NULL,
YEAR_VALUE VARCHAR2(4) NOT NULL,
MONTH_VALUE VARCHAR2(2) NOT NULL,
@@ -47,14 +47,16 @@ CREATE TABLE CT_MRDS.A_PARALLEL_EXPORT_CHUNKS (
STATUS VARCHAR2(30) DEFAULT 'PENDING' NOT NULL,
ERROR_MESSAGE VARCHAR2(4000),
EXPORT_TIMESTAMP TIMESTAMP,
CREATED_DATE TIMESTAMP DEFAULT SYSTIMESTAMP NOT NULL
CREATED_DATE TIMESTAMP DEFAULT SYSTIMESTAMP NOT NULL,
CONSTRAINT PK_PARALLEL_EXPORT_CHUNKS PRIMARY KEY (TASK_NAME, CHUNK_ID)
);
CREATE INDEX IX_PARALLEL_CHUNKS_TASK ON CT_MRDS.A_PARALLEL_EXPORT_CHUNKS(TASK_NAME);
-- Index for status-based queries (e.g., WHERE STATUS = 'FAILED' AND TASK_NAME = ?)
CREATE INDEX IX_PARALLEL_CHUNKS_STATUS_TASK ON CT_MRDS.A_PARALLEL_EXPORT_CHUNKS(STATUS, TASK_NAME);
COMMENT ON TABLE CT_MRDS.A_PARALLEL_EXPORT_CHUNKS IS 'Permanent table for parallel export chunk processing (DBMS_PARALLEL_EXECUTE) - permanent because GTT data not visible in parallel callback sessions';
COMMENT ON COLUMN CT_MRDS.A_PARALLEL_EXPORT_CHUNKS.CHUNK_ID IS 'Unique chunk identifier (partition number)';
COMMENT ON COLUMN CT_MRDS.A_PARALLEL_EXPORT_CHUNKS.TASK_NAME IS 'DBMS_PARALLEL_EXECUTE task name for cleanup';
COMMENT ON TABLE CT_MRDS.A_PARALLEL_EXPORT_CHUNKS IS 'Permanent table for parallel export chunk processing (DBMS_PARALLEL_EXECUTE) - permanent because GTT data not visible in parallel callback sessions. PK: (TASK_NAME, CHUNK_ID) ensures session isolation for concurrent exports.';
COMMENT ON COLUMN CT_MRDS.A_PARALLEL_EXPORT_CHUNKS.CHUNK_ID IS 'Chunk identifier within task (partition number) - unique per TASK_NAME, not globally';
COMMENT ON COLUMN CT_MRDS.A_PARALLEL_EXPORT_CHUNKS.TASK_NAME IS 'DBMS_PARALLEL_EXECUTE task name - session isolation key, part of composite PK with CHUNK_ID';
COMMENT ON COLUMN CT_MRDS.A_PARALLEL_EXPORT_CHUNKS.YEAR_VALUE IS 'Partition year (YYYY)';
COMMENT ON COLUMN CT_MRDS.A_PARALLEL_EXPORT_CHUNKS.MONTH_VALUE IS 'Partition month (MM)';
COMMENT ON COLUMN CT_MRDS.A_PARALLEL_EXPORT_CHUNKS.SCHEMA_NAME IS 'Schema owning the source table';

View File

@@ -18,34 +18,104 @@ AS
----------------------------------------------------------------------------------------------------
/**
* Deletes export file from OCI bucket if it exists (used for cleanup before retry)
* Silently ignores if file doesn't exist (ORA-20404)
* Deletes ALL files matching specific file pattern before retry export
* Critical for preventing data duplication when DBMS_CLOUD.EXPORT_DATA fails mid-process
*
* Problem: Export fails after creating partial file(s), retry creates new _2, _3 suffixed files
* Solution: Delete ALL files matching the base filename pattern before retry
*
* Pattern matching strategy:
* - Parquet: folder/PARTITION_YEAR=2024/PARTITION_MONTH=11/*.parquet (folder-level safe - each chunk has own partition folder)
* - CSV: folder/TABLENAME_202411*.csv (file-level pattern - multiple chunks share same folder!)
*
* CRITICAL for parallel processing:
* - Parquet chunks are isolated by partition folder structure (safe to delete folder/*)
* - CSV chunks share flat folder structure - MUST use file-specific pattern (TABLENAME_YYYYMM*)
* to avoid deleting files from other parallel chunks in same folder
**/
PROCEDURE DELETE_FAILED_EXPORT_FILE(
pFileUri IN VARCHAR2,
pCredentialName IN VARCHAR2,
pParameters IN VARCHAR2
) IS
vBucketUri VARCHAR2(4000);
vFolderPath VARCHAR2(4000);
vFileName VARCHAR2(1000);
vFileNamePattern VARCHAR2(1000);
vSlashPos NUMBER;
vDotPos NUMBER;
vFilesDeleted NUMBER := 0;
BEGIN
BEGIN
ENV_MANAGER.LOG_PROCESS_EVENT('Attempting to delete potentially corrupted file: ' || pFileUri, 'DEBUG', pParameters);
-- Extract components from URI
-- Example Parquet: https://.../bucket/folder/PARTITION_YEAR=2024/PARTITION_MONTH=11/202411.parquet
-- Example CSV: https://.../bucket/folder/TABLENAME_202411.csv
-- Find last slash before filename
vSlashPos := INSTR(pFileUri, '/', -1);
IF vSlashPos > 0 THEN
-- Extract filename from URI (after last slash)
vFileName := SUBSTR(pFileUri, vSlashPos + 1);
DBMS_CLOUD.DELETE_OBJECT(
credential_name => pCredentialName,
object_uri => pFileUri
);
-- Extract folder path (before last slash)
vFolderPath := SUBSTR(pFileUri, 1, vSlashPos - 1);
ENV_MANAGER.LOG_PROCESS_EVENT('Deleted existing file (cleanup before retry): ' || pFileUri, 'INFO', pParameters);
EXCEPTION
WHEN OTHERS THEN
-- Object not found is OK (file doesn't exist)
IF SQLCODE = -20404 THEN
ENV_MANAGER.LOG_PROCESS_EVENT('File does not exist (OK): ' || pFileUri, 'DEBUG', pParameters);
ELSE
-- Log but don't fail - export will attempt anyway
ENV_MANAGER.LOG_PROCESS_EVENT('Warning: Could not delete file (will retry export anyway): ' || SQLERRM, 'WARNING', pParameters);
END IF;
END;
-- Find bucket URI (protocol + namespace + bucket name)
-- Bucket URI ends after /o/ in OCI Object Storage URLs
vBucketUri := SUBSTR(pFileUri, 1, INSTR(pFileUri, '/o/') + 2);
-- Extract relative folder path (after bucket)
vFolderPath := SUBSTR(vFolderPath, LENGTH(vBucketUri) + 1);
-- Create file pattern by removing extension
-- Oracle adds suffixes BEFORE extension: file.csv -> file_1_timestamp.csv
-- Pattern: file* matches file_1_timestamp.csv, file_2_timestamp.csv
vDotPos := INSTR(vFileName, '.', -1);
IF vDotPos > 0 THEN
vFileNamePattern := SUBSTR(vFileName, 1, vDotPos - 1) || '%';
ELSE
vFileNamePattern := vFileName || '%';
END IF;
ENV_MANAGER.LOG_PROCESS_EVENT('Cleanup before retry - Pattern: ' || vFolderPath || '/' || vFileNamePattern, 'DEBUG', pParameters);
-- List and delete ALL files matching pattern
-- CRITICAL: Uses file-specific pattern for CSV chunk isolation in shared folder
FOR rec IN (
SELECT object_name
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(
credential_name => pCredentialName,
location_uri => vBucketUri
))
WHERE object_name LIKE vFolderPath || '/' || vFileNamePattern
) LOOP
BEGIN
DBMS_CLOUD.DELETE_OBJECT(
credential_name => pCredentialName,
object_uri => vBucketUri || rec.object_name
);
vFilesDeleted := vFilesDeleted + 1;
ENV_MANAGER.LOG_PROCESS_EVENT('Deleted partial file ' || vFilesDeleted || ': ' || rec.object_name, 'DEBUG', pParameters);
EXCEPTION
WHEN OTHERS THEN
-- Log but continue - don't fail entire cleanup
ENV_MANAGER.LOG_PROCESS_EVENT('Warning: Could not delete ' || rec.object_name || ': ' || SQLERRM, 'WARNING', pParameters);
END;
END LOOP;
IF vFilesDeleted > 0 THEN
ENV_MANAGER.LOG_PROCESS_EVENT('Cleanup completed: Deleted ' || vFilesDeleted || ' partial file(s) from previous failed export', 'INFO', pParameters);
ELSE
ENV_MANAGER.LOG_PROCESS_EVENT('No existing files to clean up (pattern match: ' || vFileNamePattern || ')', 'DEBUG', pParameters);
END IF;
ELSE
ENV_MANAGER.LOG_PROCESS_EVENT('Warning: Cannot parse file URI for cleanup: ' || pFileUri, 'WARNING', pParameters);
END IF;
EXCEPTION
WHEN OTHERS THEN
-- Don't fail export if cleanup fails - log and continue
ENV_MANAGER.LOG_PROCESS_EVENT('Warning: Cleanup failed (will retry export anyway): ' || SQLERRM, 'WARNING', pParameters);
END DELETE_FAILED_EXPORT_FILE;
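The URI parsing in DELETE_FAILED_EXPORT_FILE (bucket URI ends after the `/o/` segment, the extension is stripped and replaced with a wildcard so suffixed retry files like `file_1_ts.csv` also match) can be sketched outside PL/SQL. A minimal Python sketch of the same decomposition, with the example URI purely illustrative:

```python
def derive_cleanup_pattern(file_uri: str):
    """Split an OCI-style object URI into (bucket_uri, folder_path, name_pattern),
    mirroring the parsing in DELETE_FAILED_EXPORT_FILE: the bucket URI ends after
    the '/o/' segment, and the pattern drops the extension so that retry files
    with _N_timestamp suffixes inserted before the extension still match."""
    slash = file_uri.rfind('/')          # last slash before the filename
    o_pos = file_uri.find('/o/')         # end of bucket URI in OCI Object Storage URLs
    if slash < 0 or o_pos < 0:
        return None                      # cannot parse - caller logs a warning
    file_name = file_uri[slash + 1:]
    bucket_uri = file_uri[:o_pos + 3]    # include the trailing '/o/'
    folder_path = file_uri[:slash][len(bucket_uri):]
    dot = file_name.rfind('.')
    stem = file_name[:dot] if dot > 0 else file_name
    return bucket_uri, folder_path, stem + '%'   # '%' matches LIKE in the cleanup loop
```

As the docstring above notes, this file-level pattern is what keeps CSV cleanup from deleting sibling chunks that share the same flat folder.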
----------------------------------------------------------------------------------------------------
@@ -415,6 +485,8 @@ AS
AND L.LOAD_START >= TO_DATE(' || CHR(39) || TO_CHAR(pMinDate, 'YYYY-MM-DD HH24:MI:SS') || CHR(39) || ', ''YYYY-MM-DD HH24:MI:SS'')
AND L.LOAD_START < TO_DATE(' || CHR(39) || TO_CHAR(pMaxDate, 'YYYY-MM-DD HH24:MI:SS') || CHR(39) || ', ''YYYY-MM-DD HH24:MI:SS'')';
ENV_MANAGER.LOG_PROCESS_EVENT('Processing Year/Month: ' || pYear || '/' || pMonth || ' (Format: '||pFormat||')', 'DEBUG', pParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Export query: ' || vQuery, 'DEBUG', pParameters);
-- Construct the URI based on format
IF pFormat = 'PARQUET' THEN
-- Parquet: Use Hive-style partitioning
@@ -425,6 +497,7 @@ AS
'PARTITION_MONTH=' || sanitizeFilename(pMonth) || '/' ||
sanitizeFilename(pYear) || sanitizeFilename(pMonth) || '.parquet';
ENV_MANAGER.LOG_PROCESS_EVENT('Parquet export URI: ' || vUri, 'DEBUG', pParameters);
-- Delete potentially corrupted file from previous failed attempt
@@ -445,6 +518,7 @@ AS
sanitizeFilename(vFileName);
ENV_MANAGER.LOG_PROCESS_EVENT('CSV export URI: ' || vUri, 'DEBUG', pParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('CSV maxfilesize: ' || pMaxFileSize || ' bytes (' || ROUND(pMaxFileSize/1048576, 2) || ' MB)', 'DEBUG', pParameters);
-- Delete potentially corrupted file from previous failed attempt
-- This prevents Oracle from creating _1 suffixed files on retry
@@ -472,8 +546,7 @@ AS
RAISE_APPLICATION_ERROR(-20001, 'Unsupported format: ' || pFormat || '. Use PARQUET or CSV.');
END IF;
ENV_MANAGER.LOG_PROCESS_EVENT('Processing Year/Month: ' || pYear || '/' || pMonth || ' (Format: ' || pFormat || ')', 'DEBUG', pParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Export query: ' || vQuery, 'DEBUG', pParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Export completed successfully for ' || pYear || '/' || pMonth, 'DEBUG', pParameters);
END EXPORT_SINGLE_PARTITION;
----------------------------------------------------------------------------------------------------
@@ -485,7 +558,8 @@ AS
**/
PROCEDURE EXPORT_PARTITION_PARALLEL (
pStartId IN NUMBER,
pEndId IN NUMBER
pEndId IN NUMBER,
pTaskName IN VARCHAR2 DEFAULT NULL
) IS
vYear VARCHAR2(4);
vMonth VARCHAR2(2);
@@ -502,9 +576,12 @@ AS
vFileBaseName VARCHAR2(1000);
vMaxFileSize NUMBER;
vJobClass VARCHAR2(128);
vTaskName VARCHAR2(128);
vParameters VARCHAR2(4000);
BEGIN
-- Retrieve chunk context from global temporary table
-- Retrieve chunk context from A_PARALLEL_EXPORT_CHUNKS table
-- CRITICAL: Filter by CHUNK_ID and TASK_NAME for precise session isolation
-- pTaskName parameter passed from RUN_TASK ensures deterministic single-row retrieval
SELECT
YEAR_VALUE,
MONTH_VALUE,
@@ -520,7 +597,8 @@ AS
FORMAT_TYPE,
FILE_BASE_NAME,
MAX_FILE_SIZE,
JOB_CLASS
JOB_CLASS,
TASK_NAME
INTO
vYear,
vMonth,
@@ -536,18 +614,22 @@ AS
vFormat,
vFileBaseName,
vMaxFileSize,
vJobClass
vJobClass,
vTaskName
FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS
WHERE CHUNK_ID = pStartId;
WHERE CHUNK_ID = pStartId
AND TASK_NAME = pTaskName;
vParameters := 'Parallel task - Year: ' || vYear || ', Month: ' || vMonth || ', ChunkID: ' || pStartId;
vParameters := 'Parallel task - Year: ' || vYear || ', Month: ' || vMonth || ', ChunkID: ' || pStartId || ', TaskName: ' || vTaskName;
ENV_MANAGER.LOG_PROCESS_EVENT('Starting parallel export for partition ' || vYear || '/' || vMonth, 'DEBUG', vParameters);
-- Mark chunk as PROCESSING
-- CRITICAL: Use both CHUNK_ID AND TASK_NAME for session isolation
UPDATE CT_MRDS.A_PARALLEL_EXPORT_CHUNKS
SET STATUS = 'PROCESSING',
ERROR_MESSAGE = NULL
WHERE CHUNK_ID = pStartId;
WHERE CHUNK_ID = pStartId
AND TASK_NAME = vTaskName;
COMMIT;
-- Call the worker procedure
@@ -570,26 +652,30 @@ AS
);
-- Mark chunk as COMPLETED
-- CRITICAL: Use both CHUNK_ID AND TASK_NAME for session isolation
UPDATE CT_MRDS.A_PARALLEL_EXPORT_CHUNKS
SET STATUS = 'COMPLETED',
EXPORT_TIMESTAMP = SYSTIMESTAMP,
ERROR_MESSAGE = NULL
WHERE CHUNK_ID = pStartId;
WHERE CHUNK_ID = pStartId
AND TASK_NAME = vTaskName;
COMMIT;
ENV_MANAGER.LOG_PROCESS_EVENT('Completed parallel export for partition ' || vYear || '/' || vMonth, 'DEBUG', vParameters);
EXCEPTION
WHEN OTHERS THEN
-- Capture error details in variable (SQLERRM cannot be used directly in SQL)
vgMsgTmp := 'Parallel task error for partition ' || vYear || '/' || vMonth || ' (ChunkID: ' || pStartId || '): ' || SQLERRM || cgBL || DBMS_UTILITY.FORMAT_ERROR_BACKTRACE;
vgMsgTmp := 'Parallel task error for partition ' || vYear || '/' || vMonth || ' (ChunkID: ' || pStartId || ', TaskName: ' || vTaskName || '): ' || SQLERRM || cgBL || DBMS_UTILITY.FORMAT_ERROR_BACKTRACE;
ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
-- Mark chunk as FAILED with error message
-- CRITICAL: Use both CHUNK_ID AND TASK_NAME for session isolation
-- Use vgMsgTmp variable instead of SQLERRM directly (Oracle limitation in SQL context)
UPDATE CT_MRDS.A_PARALLEL_EXPORT_CHUNKS
SET STATUS = 'FAILED',
ERROR_MESSAGE = SUBSTR(vgMsgTmp, 1, 4000)
WHERE CHUNK_ID = pStartId;
WHERE CHUNK_ID = pStartId
AND TASK_NAME = vTaskName;
COMMIT;
RAISE;
@@ -1056,8 +1142,8 @@ AS
-- Populate chunks table (insert new chunks, preserve FAILED chunks for retry)
FOR i IN 1 .. vPartitions.COUNT LOOP
MERGE INTO CT_MRDS.A_PARALLEL_EXPORT_CHUNKS t
USING (SELECT i AS chunk_id, vPartitions(i).year AS yr, vPartitions(i).month AS mn FROM DUAL) s
ON (t.CHUNK_ID = s.chunk_id)
USING (SELECT i AS chunk_id, vTaskName AS task_name, vPartitions(i).year AS yr, vPartitions(i).month AS mn FROM DUAL) s
ON (t.CHUNK_ID = s.chunk_id AND t.TASK_NAME = s.task_name)
WHEN NOT MATCHED THEN
INSERT (CHUNK_ID, TASK_NAME, YEAR_VALUE, MONTH_VALUE, SCHEMA_NAME, TABLE_NAME, KEY_COLUMN_NAME,
BUCKET_URI, FOLDER_NAME, PROCESSED_COLUMNS, MIN_DATE, MAX_DATE,
@@ -1066,33 +1152,34 @@ AS
vBucketUri, pFolderName, vProcessedColumnList, pMinDate, pMaxDate,
pCredentialName, 'PARQUET', NULL, pTemplateTableName, 104857600, pJobClass, 'PENDING')
WHEN MATCHED THEN
UPDATE SET TASK_NAME = vTaskName,
STATUS = CASE WHEN t.STATUS = 'FAILED' THEN 'PENDING' ELSE t.STATUS END,
-- Match found: chunk exists for SAME task (composite PK: TASK_NAME, CHUNK_ID)
-- This handles retry scenario: reset FAILED chunks to PENDING for re-processing
UPDATE SET STATUS = CASE WHEN t.STATUS = 'FAILED' THEN 'PENDING' ELSE t.STATUS END,
ERROR_MESSAGE = CASE WHEN t.STATUS = 'FAILED' THEN NULL ELSE t.ERROR_MESSAGE END;
END LOOP;
COMMIT;
-- Log chunk statistics
-- Log chunk statistics (session-safe: only count chunks for THIS task)
DECLARE
vPendingCount NUMBER;
vFailedCount NUMBER;
BEGIN
SELECT COUNT(*) INTO vPendingCount FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE STATUS = 'PENDING';
SELECT COUNT(*) INTO vFailedCount FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE STATUS = 'FAILED';
SELECT COUNT(*) INTO vPendingCount FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE STATUS = 'PENDING' AND TASK_NAME = vTaskName;
SELECT COUNT(*) INTO vFailedCount FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE STATUS = 'FAILED' AND TASK_NAME = vTaskName;
ENV_MANAGER.LOG_PROCESS_EVENT('Chunk statistics: PENDING=' || vPendingCount || ', FAILED (retry)=' || vFailedCount, 'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Chunk statistics for task ' || vTaskName || ': PENDING=' || vPendingCount || ', FAILED (retry)=' || vFailedCount, 'INFO', vParameters);
END;
-- Create parallel task
DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => vTaskName);
-- Define chunks by number range (1 to partition count)
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_NUMBER_COL(
-- Define chunks using SQL query to ensure TASK_NAME isolation
-- CRITICAL: Filter by TASK_NAME to avoid selecting chunks from other concurrent sessions
-- CRITICAL: Use START_ID and END_ID aliases to avoid ORA-00960 ambiguous column naming
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(
task_name => vTaskName,
table_owner => 'CT_MRDS',
table_name => 'A_PARALLEL_EXPORT_CHUNKS',
table_column => 'CHUNK_ID',
chunk_size => 1 -- Each partition is one chunk
sql_stmt => 'SELECT CHUNK_ID AS START_ID, CHUNK_ID AS END_ID FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE TASK_NAME = ''' || vTaskName || ''' ORDER BY CHUNK_ID',
by_rowid => FALSE
);
-- Execute task in parallel
@@ -1101,7 +1188,7 @@ AS
IF pJobClass IS NOT NULL THEN
DBMS_PARALLEL_EXECUTE.RUN_TASK(
task_name => vTaskName,
sql_stmt => 'BEGIN CT_MRDS.DATA_EXPORTER.EXPORT_PARTITION_PARALLEL(:start_id, :end_id); END;',
sql_stmt => 'BEGIN CT_MRDS.DATA_EXPORTER.EXPORT_PARTITION_PARALLEL(:start_id, :end_id, ''' || vTaskName || '''); END;',
language_flag => DBMS_SQL.NATIVE,
parallel_level => pParallelDegree,
job_class => pJobClass
@@ -1109,7 +1196,7 @@ AS
ELSE
DBMS_PARALLEL_EXECUTE.RUN_TASK(
task_name => vTaskName,
sql_stmt => 'BEGIN CT_MRDS.DATA_EXPORTER.EXPORT_PARTITION_PARALLEL(:start_id, :end_id); END;',
sql_stmt => 'BEGIN CT_MRDS.DATA_EXPORTER.EXPORT_PARTITION_PARALLEL(:start_id, :end_id, ''' || vTaskName || '''); END;',
language_flag => DBMS_SQL.NATIVE,
parallel_level => pParallelDegree
);
@@ -1360,8 +1447,8 @@ AS
-- Populate chunks table (insert new chunks, preserve FAILED chunks for retry)
FOR i IN 1 .. vPartitions.COUNT LOOP
MERGE INTO CT_MRDS.A_PARALLEL_EXPORT_CHUNKS t
USING (SELECT i AS chunk_id, vPartitions(i).year AS yr, vPartitions(i).month AS mn FROM DUAL) s
ON (t.CHUNK_ID = s.chunk_id)
USING (SELECT i AS chunk_id, vTaskName AS task_name, vPartitions(i).year AS yr, vPartitions(i).month AS mn FROM DUAL) s
ON (t.CHUNK_ID = s.chunk_id AND t.TASK_NAME = s.task_name)
WHEN NOT MATCHED THEN
INSERT (CHUNK_ID, TASK_NAME, YEAR_VALUE, MONTH_VALUE, SCHEMA_NAME, TABLE_NAME, KEY_COLUMN_NAME,
BUCKET_URI, FOLDER_NAME, PROCESSED_COLUMNS, MIN_DATE, MAX_DATE,
@@ -1370,33 +1457,34 @@ AS
vBucketUri, pFolderName, vProcessedColumnList, pMinDate, pMaxDate,
pCredentialName, 'CSV', vFileBaseName, pTemplateTableName, pMaxFileSize, pJobClass, 'PENDING')
WHEN MATCHED THEN
UPDATE SET TASK_NAME = vTaskName,
STATUS = CASE WHEN t.STATUS = 'FAILED' THEN 'PENDING' ELSE t.STATUS END,
-- Match found: chunk exists for SAME task (composite PK: TASK_NAME, CHUNK_ID)
-- This handles retry scenario: reset FAILED chunks to PENDING for re-processing
UPDATE SET STATUS = CASE WHEN t.STATUS = 'FAILED' THEN 'PENDING' ELSE t.STATUS END,
ERROR_MESSAGE = CASE WHEN t.STATUS = 'FAILED' THEN NULL ELSE t.ERROR_MESSAGE END;
END LOOP;
COMMIT;
-- Log chunk statistics
-- Log chunk statistics (session-safe: only count chunks for THIS task)
DECLARE
vPendingCount NUMBER;
vFailedCount NUMBER;
BEGIN
SELECT COUNT(*) INTO vPendingCount FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE STATUS = 'PENDING';
SELECT COUNT(*) INTO vFailedCount FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE STATUS = 'FAILED';
SELECT COUNT(*) INTO vPendingCount FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE STATUS = 'PENDING' AND TASK_NAME = vTaskName;
SELECT COUNT(*) INTO vFailedCount FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE STATUS = 'FAILED' AND TASK_NAME = vTaskName;
ENV_MANAGER.LOG_PROCESS_EVENT('Chunk statistics: PENDING=' || vPendingCount || ', FAILED (retry)=' || vFailedCount, 'INFO', vParameters);
ENV_MANAGER.LOG_PROCESS_EVENT('Chunk statistics for task ' || vTaskName || ': PENDING=' || vPendingCount || ', FAILED (retry)=' || vFailedCount, 'INFO', vParameters);
END;
-- Create parallel task
DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => vTaskName);
-- Define chunks by number range (1 to partition count)
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_NUMBER_COL(
-- Define chunks using SQL query to ensure TASK_NAME isolation
-- CRITICAL: Filter by TASK_NAME to avoid selecting chunks from other concurrent sessions
-- CRITICAL: Use START_ID and END_ID aliases to avoid ORA-00960 ambiguous column naming
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(
task_name => vTaskName,
table_owner => 'CT_MRDS',
table_name => 'A_PARALLEL_EXPORT_CHUNKS',
table_column => 'CHUNK_ID',
chunk_size => 1 -- Each partition is one chunk
sql_stmt => 'SELECT CHUNK_ID AS START_ID, CHUNK_ID AS END_ID FROM CT_MRDS.A_PARALLEL_EXPORT_CHUNKS WHERE TASK_NAME = ''' || vTaskName || ''' ORDER BY CHUNK_ID',
by_rowid => FALSE
);
-- Execute task in parallel
@@ -1405,7 +1493,7 @@ AS
IF pJobClass IS NOT NULL THEN
DBMS_PARALLEL_EXECUTE.RUN_TASK(
task_name => vTaskName,
sql_stmt => 'BEGIN CT_MRDS.DATA_EXPORTER.EXPORT_PARTITION_PARALLEL(:start_id, :end_id); END;',
sql_stmt => 'BEGIN CT_MRDS.DATA_EXPORTER.EXPORT_PARTITION_PARALLEL(:start_id, :end_id, ''' || vTaskName || '''); END;',
language_flag => DBMS_SQL.NATIVE,
parallel_level => pParallelDegree,
job_class => pJobClass
@@ -1413,7 +1501,7 @@ AS
ELSE
DBMS_PARALLEL_EXECUTE.RUN_TASK(
task_name => vTaskName,
sql_stmt => 'BEGIN CT_MRDS.DATA_EXPORTER.EXPORT_PARTITION_PARALLEL(:start_id, :end_id); END;',
sql_stmt => 'BEGIN CT_MRDS.DATA_EXPORTER.EXPORT_PARTITION_PARALLEL(:start_id, :end_id, ''' || vTaskName || '''); END;',
language_flag => DBMS_SQL.NATIVE,
parallel_level => pParallelDegree
);
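The retry-safe MERGE in the hunks above (insert new chunks as PENDING, reset existing FAILED chunks to PENDING, keyed by the composite (TASK_NAME, CHUNK_ID) so concurrent tasks never touch each other's rows) reduces to a keyed upsert. A minimal Python sketch of that semantics, with all names illustrative:

```python
def merge_chunks(table: dict, task_name: str, chunk_ids) -> dict:
    """Upsert chunks keyed by (task_name, chunk_id), mirroring the MERGE:
    WHEN NOT MATCHED -> insert as PENDING; WHEN MATCHED and FAILED -> reset
    to PENDING for retry; rows belonging to other tasks are left untouched
    (session isolation via the composite key)."""
    for cid in chunk_ids:
        key = (task_name, cid)
        row = table.get(key)
        if row is None:                       # WHEN NOT MATCHED: new chunk
            table[key] = {'status': 'PENDING', 'error': None}
        elif row['status'] == 'FAILED':       # WHEN MATCHED: retry scenario
            row['status'] = 'PENDING'
            row['error'] = None
    return table
```

COMPLETED and PROCESSING rows pass through unchanged, which is why a rerun of the export only re-dispatches the chunks that actually failed.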

View File

@@ -9,17 +9,17 @@ AS
**/
-- Package Version Information
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '2.11.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-02-18 10:00:00';
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '2.14.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-02-25 09:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(100) := 'Grzegorz Michalski';
-- Version History (last 3-5 changes)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'v2.11.0 (2026-02-18): Added pJobClass parameter to EXPORT_TABLE_DATA_BY_DATE and EXPORT_TABLE_DATA_TO_CSV_BY_DATE for Oracle Scheduler job class support (resource/priority management).' || CHR(10) ||
'v2.10.1 (2026-02-17): CRITICAL FIX - Remove redundant COMPLETED chunks deletion before parallel export that caused ORA-01403 errors (phantom chunks created by CREATE_CHUNKS_BY_NUMBER_COL).' || CHR(10) ||
'v2.10.0 (2026-02-13): CRITICAL FIX - Register ALL files created by DBMS_CLOUD.EXPORT_DATA (multi-file support due to Oracle parallel processing on large instances). Prevents orphaned files in rollback.' || CHR(10) ||
'v2.9.0 (2026-02-13): Added pProcessName parameter to EXPORT_TABLE_DATA and EXPORT_TABLE_DATA_TO_CSV_BY_DATE procedures for process tracking in A_SOURCE_FILE_RECEIVED table.' || CHR(10) ||
'v2.8.1 (2026-02-12): FIX query in EXPORT_TABLE_DATA - removed A_LOAD_HISTORY join to ensure single file output (simple SELECT).' || CHR(10);
'v2.14.0 (2026-02-25): OPTIMIZATION - Added pTaskName parameter to EXPORT_PARTITION_PARALLEL for deterministic filtering. Replaced FETCH FIRST 1 ROW ONLY safeguard with precise WHERE CHUNK_ID AND TASK_NAME filter. Eliminates ORDER BY overhead and provides cleaner session isolation.' || CHR(10) ||
'v2.13.1 (2026-02-25): CRITICAL FIX - Added START_ID and END_ID aliases in CREATE_CHUNKS_BY_SQL to avoid ORA-00960 ambiguous column naming error.' || CHR(10) ||
'v2.13.0 (2026-02-25): CRITICAL SESSION ISOLATION FIX - Changed CREATE_CHUNKS_BY_NUMBER_COL to CREATE_CHUNKS_BY_SQL with TASK_NAME filter (fixes ORA-01422 in concurrent sessions). Added ORDER BY CREATED_DATE DESC FETCH FIRST 1 ROW safeguard to EXPORT_PARTITION_PARALLEL SELECT. Composite PK (TASK_NAME, CHUNK_ID) now fully functional.' || CHR(10) ||
'v2.12.0 (2026-02-24): CRITICAL FIX - Rewritten DELETE_FAILED_EXPORT_FILE to use file-specific pattern matching (prevents deleting parallel CSV chunks in shared folder). Added vQuery logging before DBMS_CLOUD calls. Added CSV maxfilesize logging.' || CHR(10) ||
'v2.11.0 (2026-02-18): Added pJobClass parameter to EXPORT_TABLE_DATA_BY_DATE and EXPORT_TABLE_DATA_TO_CSV_BY_DATE for Oracle Scheduler job class support (resource/priority management).' || CHR(10);
cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);
vgMsgTmp VARCHAR2(32000);
@@ -54,10 +54,12 @@ AS
* but should NOT be called directly by external code.
* @param pStartId - Chunk start ID (CHUNK_ID from A_PARALLEL_EXPORT_CHUNKS table)
* @param pEndId - Chunk end ID (same as pStartId for single-row chunks)
* @param pTaskName - Task name for session isolation (optional, DEFAULT NULL for backward compatibility)
**/
PROCEDURE EXPORT_PARTITION_PARALLEL (
pStartId IN NUMBER,
pEndId IN NUMBER
pEndId IN NUMBER,
pTaskName IN VARCHAR2 DEFAULT NULL
);
---------------------------------------------------------------------------------------------------------------------------

View File

@@ -1,125 +1,31 @@
--=============================================================================================================================
-- MARS-835: Export Group 1 - Split DATA + HIST (DEBT, DEBT_DAILY)
-- MARS-835: Export Group 1 - HIST Only (DEBT, DEBT_DAILY)
--=============================================================================================================================
-- Purpose: Export last 6 months to DATA bucket (CSV), older data to HIST bucket (Parquet)
-- Purpose: Export ALL data to HIST bucket (Parquet with Hive-style partitioning)
-- Applies column mapping: A_ETL_LOAD_SET_FK to A_WORKFLOW_HISTORY_KEY
-- Excludes legacy columns not required in new structure
-- USES: DATA_EXPORTER v2.4.0 with pTemplateTableName for column order and date formats
-- USES: DATA_EXPORTER v2.12.0 with pTemplateTableName for column order and date formats
-- Author: Grzegorz Michalski
-- Date: 2025-12-17
-- Updated: 2026-01-11 (Updated to DATA_EXPORTER v2.4.0 with pTemplateTableName)
-- Updated: 2026-02-24 (Changed to HIST-only export, no DATA bucket split)
-- Related: MARS-835 - CSDB Data Export
--=============================================================================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
SET TIMING ON
DEFINE cutoff_date = "TRUNC(ADD_MONTHS(SYSDATE, -6), 'MM')"
PROMPT ========================================================================
PROMPT Exporting CSDB.DEBT - Split DATA + HIST
PROMPT Exporting CSDB.DEBT - HIST Only
PROMPT ========================================================================
PROMPT Last 6 months to DATA bucket (CSV format)
PROMPT Older data to HIST bucket (Parquet with partitioning)
PROMPT ALL data to HIST bucket (Parquet with Hive-style partitioning)
PROMPT Column mapping: A_ETL_LOAD_SET_FK to A_WORKFLOW_HISTORY_KEY
PROMPT Excluded columns: IDIRDEPOSITORY, VA_BONDDURATION
PROMPT ========================================================================
-- PRE-EXPORT CHECK: List existing files and count records
DECLARE
vFileCount NUMBER := 0;
vRecordCount NUMBER := 0;
vLocationUri VARCHAR2(1000);
-- Export ALL data to HIST bucket (Parquet)
-- NEW v2.12.0: Per-column date format handling with template table, full data range
BEGIN
-- Get bucket URI for DATA bucket
vLocationUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA') || 'ODS/CSDB/CSDB_DEBT/';
-- Count existing files
SELECT COUNT(*)
INTO vFileCount
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(
credential_name => 'OCI$RESOURCE_PRINCIPAL',
location_uri => vLocationUri
))
WHERE object_name NOT LIKE '%/'; -- Exclude directories
IF vFileCount > 0 THEN
DBMS_OUTPUT.PUT_LINE('===============================================================================');
DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: Files already exist in DATA bucket');
DBMS_OUTPUT.PUT_LINE('===============================================================================');
DBMS_OUTPUT.PUT_LINE('Location: ' || vLocationUri);
DBMS_OUTPUT.PUT_LINE('Files found: ' || vFileCount);
DBMS_OUTPUT.PUT_LINE('');
-- List existing files
DBMS_OUTPUT.PUT_LINE('Existing files:');
FOR rec IN (
SELECT object_name, bytes, TO_CHAR(last_modified, 'YYYY-MM-DD HH24:MI:SS') AS modified
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(
credential_name => 'OCI$RESOURCE_PRINCIPAL',
location_uri => vLocationUri
))
WHERE object_name NOT LIKE '%/'
ORDER BY object_name
) LOOP
DBMS_OUTPUT.PUT_LINE(' - ' || rec.object_name || ' (' || rec.bytes || ' bytes, ' || rec.modified || ')');
END LOOP;
-- Count records in external table
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ODS.CSDB_DEBT_ODS' INTO vRecordCount;
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('-------------------------------------------------------------------------------');
DBMS_OUTPUT.PUT_LINE('>>>');
DBMS_OUTPUT.PUT_LINE('>>> Records currently readable via external table: ' || vRecordCount);
DBMS_OUTPUT.PUT_LINE('>>>');
DBMS_OUTPUT.PUT_LINE('-------------------------------------------------------------------------------');
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('WARNING: Cannot count records in external table');
DBMS_OUTPUT.PUT_LINE('Error: ' || SQLERRM);
END;
DBMS_OUTPUT.PUT_LINE('===============================================================================');
DBMS_OUTPUT.PUT_LINE('');
ELSE
DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: No existing files found in DATA bucket - bucket is clean');
DBMS_OUTPUT.PUT_LINE('');
END IF;
END;
/
-- Export recent data to DATA bucket (CSV)
-- NEW v2.4.0: Per-column date format handling with template table for column order
BEGIN
DBMS_OUTPUT.PUT_LINE('Exporting LEGACY_DEBT data to DATA bucket (last 6 months)...');
DBMS_OUTPUT.PUT_LINE('Using Template Table: CT_ET_TEMPLATES.CSDB_DEBT');
CT_MRDS.DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
pSchemaName => 'OU_CSDB',
pTableName => 'LEGACY_DEBT',
pKeyColumnName => 'A_ETL_LOAD_SET_FK',
pBucketArea => 'DATA',
pFolderName => 'ODS/CSDB/CSDB_DEBT',
pMinDate => &cutoff_date,
pMaxDate => DATE '9999-12-31', -- Include future dates (MAX_LOAD_START can be beyond SYSDATE)
pParallelDegree => 16,
pTemplateTableName => 'CT_ET_TEMPLATES.CSDB_DEBT',
pMaxFileSize => 104857600, -- 100MB in bytes (safe for parallel execution, avoids ORA-04036)
pRegisterExport => TRUE, -- Register exported files in A_SOURCE_FILE_RECEIVED with metadata (CHECKSUM, CREATED, BYTES)
pProcessName => 'MARS-835', -- Process identifier for tracking
pJobClass => 'high' -- Oracle Scheduler job class for resource management
);
DBMS_OUTPUT.PUT_LINE('SUCCESS: LEGACY_DEBT exported to DATA bucket with template column order');
END;
/
-- Export historical data to HIST bucket (Parquet)
-- NEW v2.4.0: Per-column date format handling with template table
BEGIN
DBMS_OUTPUT.PUT_LINE('Exporting LEGACY_DEBT data to HIST bucket (older than 6 months)...');
DBMS_OUTPUT.PUT_LINE('Exporting LEGACY_DEBT data to HIST bucket (ALL data)...');
DBMS_OUTPUT.PUT_LINE('Using Template Table: CT_ET_TEMPLATES.CSDB_DEBT');
CT_MRDS.DATA_EXPORTER.EXPORT_TABLE_DATA_BY_DATE(
@@ -128,7 +34,8 @@ BEGIN
pKeyColumnName => 'A_ETL_LOAD_SET_FK',
pBucketArea => 'ARCHIVE',
pFolderName => 'ARCHIVE/CSDB/CSDB_DEBT',
pMaxDate => &cutoff_date,
pMinDate => DATE '1900-01-01', -- Include all historical data
pMaxDate => DATE '9999-12-31', -- Include all future dates
pParallelDegree => 16,
pTemplateTableName => 'CT_ET_TEMPLATES.CSDB_DEBT',
pJobClass => 'high' -- Oracle Scheduler job class for resource management
@@ -139,110 +46,18 @@ END;
/
PROMPT ========================================================================
PROMPT Exporting CSDB.LEGACY_DEBT_DAILY - Split DATA + HIST
PROMPT Exporting CSDB.LEGACY_DEBT_DAILY - HIST Only
PROMPT ========================================================================
PROMPT Last 6 months to DATA bucket (CSV format)
PROMPT Older data to HIST bucket (Parquet with partitioning)
PROMPT ALL data to HIST bucket (Parquet with Hive-style partitioning)
PROMPT Column mapping: A_ETL_LOAD_SET_FK to A_WORKFLOW_HISTORY_KEY
PROMPT Excluded columns: STEPID, PROGRAMNAME, PROGRAMCEILING, PROGRAMSTATUS,
PROMPT ISSUERNACE21SECTOR, INSTRUMENTQUOTATIONBASIS
PROMPT ========================================================================
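The Hive-style partitioning described above can be spot-checked after the export. A minimal sketch, assuming `ODS.CSDB_DEBT_DAILY_ARCHIVE` is the Parquet-backed external table over the HIST bucket and `A_WORKFLOW_HISTORY_KEY` is the partition key (both names appear elsewhere in these scripts):

```sql
-- Hypothetical spot-check: row distribution per partition key
-- in the Parquet-backed archive external table.
SELECT A_WORKFLOW_HISTORY_KEY,
       COUNT(*) AS record_count
FROM ODS.CSDB_DEBT_DAILY_ARCHIVE
GROUP BY A_WORKFLOW_HISTORY_KEY
ORDER BY A_WORKFLOW_HISTORY_KEY;
```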
-- PRE-EXPORT CHECK: List existing files and count records
DECLARE
vFileCount NUMBER := 0;
vRecordCount NUMBER := 0;
vLocationUri VARCHAR2(1000);
-- Export ALL data to HIST bucket (Parquet)
-- NEW v2.12.0: Per-column date format handling with template table, full data range
BEGIN
-- Get bucket URI for DATA bucket
vLocationUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA') || 'ODS/CSDB/CSDB_DEBT_DAILY/';
-- Count existing files
SELECT COUNT(*)
INTO vFileCount
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(
credential_name => 'OCI$RESOURCE_PRINCIPAL',
location_uri => vLocationUri
))
WHERE object_name NOT LIKE '%/'; -- Exclude directories
IF vFileCount > 0 THEN
DBMS_OUTPUT.PUT_LINE('===============================================================================');
DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: Files already exist in DATA bucket');
DBMS_OUTPUT.PUT_LINE('===============================================================================');
DBMS_OUTPUT.PUT_LINE('Location: ' || vLocationUri);
DBMS_OUTPUT.PUT_LINE('Files found: ' || vFileCount);
DBMS_OUTPUT.PUT_LINE('');
-- List existing files
DBMS_OUTPUT.PUT_LINE('Existing files:');
FOR rec IN (
SELECT object_name, bytes, TO_CHAR(last_modified, 'YYYY-MM-DD HH24:MI:SS') AS modified
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(
credential_name => 'OCI$RESOURCE_PRINCIPAL',
location_uri => vLocationUri
))
WHERE object_name NOT LIKE '%/'
ORDER BY object_name
) LOOP
DBMS_OUTPUT.PUT_LINE(' - ' || rec.object_name || ' (' || rec.bytes || ' bytes, ' || rec.modified || ')');
END LOOP;
-- Count records in external table
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ODS.CSDB_DEBT_DAILY_ODS' INTO vRecordCount;
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('-------------------------------------------------------------------------------');
DBMS_OUTPUT.PUT_LINE('>>>');
DBMS_OUTPUT.PUT_LINE('>>> Records currently readable via external table: ' || vRecordCount);
DBMS_OUTPUT.PUT_LINE('>>>');
DBMS_OUTPUT.PUT_LINE('-------------------------------------------------------------------------------');
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('WARNING: Cannot count records in external table');
DBMS_OUTPUT.PUT_LINE('Error: ' || SQLERRM);
END;
DBMS_OUTPUT.PUT_LINE('===============================================================================');
DBMS_OUTPUT.PUT_LINE('');
ELSE
DBMS_OUTPUT.PUT_LINE('PRE-EXPORT CHECK: No existing files found in DATA bucket - bucket is clean');
DBMS_OUTPUT.PUT_LINE('');
END IF;
END;
/
-- Export recent data to DATA bucket (CSV)
-- NEW v2.4.0: Per-column date format handling with template table for column order
BEGIN
DBMS_OUTPUT.PUT_LINE('Exporting LEGACY_DEBT_DAILY data to DATA bucket (last 6 months)...');
DBMS_OUTPUT.PUT_LINE('Using Template Table: CT_ET_TEMPLATES.CSDB_DEBT_DAILY');
CT_MRDS.DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
pSchemaName => 'OU_CSDB',
pTableName => 'LEGACY_DEBT_DAILY',
pKeyColumnName => 'A_ETL_LOAD_SET_FK',
pBucketArea => 'DATA',
pFolderName => 'ODS/CSDB/CSDB_DEBT_DAILY',
pMinDate => &cutoff_date,
pMaxDate => DATE '9999-12-31', -- Include future dates (MAX_LOAD_START can be beyond SYSDATE)
pParallelDegree => 16,
pTemplateTableName => 'CT_ET_TEMPLATES.CSDB_DEBT_DAILY',
pMaxFileSize => 104857600, -- 100MB in bytes (safe for parallel execution, avoids ORA-04036)
pRegisterExport => TRUE, -- Register exported files in A_SOURCE_FILE_RECEIVED with metadata (CHECKSUM, CREATED, BYTES)
pProcessName => 'MARS-835', -- Process identifier for tracking
pJobClass => 'high' -- Oracle Scheduler job class for resource management
);
DBMS_OUTPUT.PUT_LINE('SUCCESS: LEGACY_DEBT_DAILY exported to DATA bucket with template column order');
END;
/
-- Export historical data to HIST bucket (Parquet)
-- NEW v2.4.0: Per-column date format handling with template table
BEGIN
DBMS_OUTPUT.PUT_LINE('Exporting LEGACY_DEBT_DAILY data to HIST bucket (older than 6 months)...');
DBMS_OUTPUT.PUT_LINE('Exporting LEGACY_DEBT_DAILY data to HIST bucket (ALL data)...');
DBMS_OUTPUT.PUT_LINE('Using Template Table: CT_ET_TEMPLATES.CSDB_DEBT_DAILY');
CT_MRDS.DATA_EXPORTER.EXPORT_TABLE_DATA_BY_DATE(
@@ -251,7 +66,8 @@ BEGIN
pKeyColumnName => 'A_ETL_LOAD_SET_FK',
pBucketArea => 'ARCHIVE',
pFolderName => 'ARCHIVE/CSDB/CSDB_DEBT_DAILY',
pMaxDate => &cutoff_date,
pMinDate => DATE '1900-01-01', -- Include all historical data
pMaxDate => DATE '9999-12-31', -- Include all future dates
pParallelDegree => 16,
pTemplateTableName => 'CT_ET_TEMPLATES.CSDB_DEBT_DAILY',
pJobClass => 'high' -- Oracle Scheduler job class for resource management
@@ -264,8 +80,8 @@ END;
PROMPT ========================================================================
PROMPT Group 1 Export Completed
PROMPT ========================================================================
PROMPT - LEGACY_DEBT: DATA + HIST exported
PROMPT - LEGACY_DEBT_DAILY: DATA + HIST exported
PROMPT - LEGACY_DEBT: HIST exported (ALL data)
PROMPT - LEGACY_DEBT_DAILY: HIST exported (ALL data)
PROMPT ========================================================================
--=============================================================================================================================


@@ -1,10 +1,11 @@
-- =====================================================================================
-- Script: 03_MARS_835_verify_exports.sql
-- Purpose: Verify exported files exist in DATA and HIST buckets after export
-- Purpose: Verify exported files exist in HIST bucket after export (HIST-only strategy)
-- Author: Grzegorz Michalski
-- Created: 2025-12-17
-- Updated: 2026-02-24 (Changed to HIST-only verification)
-- MARS Issue: MARS-835
-- Target Locations: mrds_data_dev/ODS/CSDB/, mrds_hist_dev/ARCHIVE/CSDB/
-- Target Locations: mrds_hist_dev/ARCHIVE/CSDB/
-- =====================================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED;
@@ -13,17 +14,14 @@ SET VERIFY OFF;
SET LINESIZE 200;
PROMPT =====================================================================================
PROMPT MARS-835 Verification: Listing exported files in DATA and HIST buckets
PROMPT MARS-835 Verification: Listing exported files in HIST bucket (HIST-only strategy)
PROMPT =====================================================================================
DECLARE
vDataBucketUri VARCHAR2(500);
vHistBucketUri VARCHAR2(500);
vCredentialName VARCHAR2(100);
vFileCount NUMBER := 0;
vTotalDataFiles NUMBER := 0;
vTotalHistFiles NUMBER := 0;
vTotalDataSize NUMBER := 0;
vTotalHistSize NUMBER := 0;
TYPE t_folder_info IS RECORD (
@@ -33,25 +31,18 @@ DECLARE
);
TYPE t_folder_list IS TABLE OF t_folder_info;
vDataFolders t_folder_list;
vHistFolders t_folder_list;
BEGIN
-- Get bucket URIs and credential from FILE_MANAGER
vDataBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA');
-- Get bucket URI and credential from FILE_MANAGER
vHistBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('ARCHIVE');
vCredentialName := CT_MRDS.ENV_MANAGER.gvCredentialName;
DBMS_OUTPUT.PUT_LINE('VERIFICATION TIME: ' || TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS.FF3'));
DBMS_OUTPUT.PUT_LINE('DATA Bucket URI: ' || vDataBucketUri);
DBMS_OUTPUT.PUT_LINE('HIST Bucket URI: ' || vHistBucketUri);
DBMS_OUTPUT.PUT_LINE('');
-- Initialize folder lists
vDataFolders := t_folder_list(
t_folder_info('ODS/CSDB/CSDB_DEBT/', 'DEBT', 'CSV'),
t_folder_info('ODS/CSDB/CSDB_DEBT_DAILY/', 'DEBT_DAILY', 'CSV')
);
-- Initialize folder list (all tables in HIST)
-- Initialize folder list (all 6 tables in HIST)
vHistFolders := t_folder_list(
t_folder_info('ARCHIVE/CSDB/CSDB_DEBT/', 'DEBT', 'Parquet'),
t_folder_info('ARCHIVE/CSDB/CSDB_DEBT_DAILY/', 'DEBT_DAILY', 'Parquet'),
@@ -62,49 +53,7 @@ BEGIN
);
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
DBMS_OUTPUT.PUT_LINE('Checking DATA Bucket Exports (CSV format - last 6 months)');
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
-- Check DATA bucket exports
FOR i IN 1..vDataFolders.COUNT LOOP
vFileCount := 0;
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('Table: ' || vDataFolders(i).table_name || ' (' || vDataFolders(i).expected_format || ')');
DBMS_OUTPUT.PUT_LINE('Folder: ' || vDataFolders(i).folder_name);
DBMS_OUTPUT.PUT_LINE('-------------------------------------------------------------------------------------');
BEGIN
FOR rec IN (
SELECT object_name, bytes, TO_CHAR(created, 'YYYY-MM-DD HH24:MI:SS') AS created_date
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(
credential_name => vCredentialName,
location_uri => vDataBucketUri || vDataFolders(i).folder_name
))
WHERE object_name LIKE '%.csv'
ORDER BY created DESC
) LOOP
vFileCount := vFileCount + 1;
vTotalDataFiles := vTotalDataFiles + 1;
vTotalDataSize := vTotalDataSize + rec.bytes;
DBMS_OUTPUT.PUT_LINE(' [' || vFileCount || '] ' || rec.object_name ||
' (' || ROUND(rec.bytes/1024/1024, 2) || ' MB) - ' || rec.created_date);
END LOOP;
IF vFileCount = 0 THEN
DBMS_OUTPUT.PUT_LINE(' [ERROR] No CSV files found - Export may have failed!');
ELSE
DBMS_OUTPUT.PUT_LINE(' [SUCCESS] Found ' || vFileCount || ' CSV file(s)');
END IF;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(' [ERROR] Cannot access folder - ' || SQLERRM);
END;
END LOOP;
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
DBMS_OUTPUT.PUT_LINE('Checking HIST Bucket Exports (Parquet with Hive partitioning)');
DBMS_OUTPUT.PUT_LINE('Checking HIST Bucket Exports (Parquet with Hive partitioning - ALL data)');
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
-- Check HIST bucket exports
@@ -155,24 +104,19 @@ BEGIN
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
DBMS_OUTPUT.PUT_LINE('Export Verification Summary');
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
DBMS_OUTPUT.PUT_LINE('DATA Bucket (CSV):');
DBMS_OUTPUT.PUT_LINE(' - Total files: ' || vTotalDataFiles);
DBMS_OUTPUT.PUT_LINE(' - Total size: ' || ROUND(vTotalDataSize/1024/1024/1024, 2) || ' GB');
DBMS_OUTPUT.PUT_LINE(' - Expected tables: 2 (DEBT, DEBT_DAILY - last 6 months)');
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('HIST Bucket (Parquet):');
DBMS_OUTPUT.PUT_LINE('HIST Bucket (Parquet - HIST-only strategy):');
DBMS_OUTPUT.PUT_LINE(' - Total files: ' || vTotalHistFiles || '+');
DBMS_OUTPUT.PUT_LINE(' - Total size: ' || ROUND(vTotalHistSize/1024/1024/1024, 2) || '+ GB (sample)');
DBMS_OUTPUT.PUT_LINE(' - Expected tables: 6 (all CSDB tables with historical data)');
DBMS_OUTPUT.PUT_LINE(' - Expected tables: 6 (all CSDB tables exported to HIST)');
DBMS_OUTPUT.PUT_LINE('');
IF vTotalHistFiles >= 6 THEN
DBMS_OUTPUT.PUT_LINE('[SUCCESS] OVERALL STATUS: Export appears SUCCESSFUL');
DBMS_OUTPUT.PUT_LINE(' Files found in HIST bucket for all tables');
DBMS_OUTPUT.PUT_LINE(' Proceed to record count verification (Step 4)');
ELSIF vTotalHistFiles = 0 THEN
DBMS_OUTPUT.PUT_LINE('[FAILED] OVERALL STATUS: Export FAILED');
DBMS_OUTPUT.PUT_LINE(' No files found in HIST bucket');
DBMS_OUTPUT.PUT_LINE(' Review export logs for errors');
ELSE
DBMS_OUTPUT.PUT_LINE('[WARNING] OVERALL STATUS: Partial export detected');


@@ -1,10 +1,11 @@
-- =====================================================================================
-- Script: 04_MARS_835_verify_record_counts.sql
-- Purpose: Verify record counts match between source tables and exported data
-- Purpose: Verify record counts match between source tables and exported data (HIST-only)
-- Author: Grzegorz Michalski
-- Created: 2025-12-17
-- Updated: 2026-02-24 (Changed to HIST-only verification)
-- MARS Issue: MARS-835
-- Verification: Compare OU_CSDB source tables with ODS external tables
-- Verification: Compare OU_CSDB source tables with ODS external tables (HIST only)
-- =====================================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED;
@@ -13,28 +14,23 @@ SET VERIFY OFF;
SET LINESIZE 200;
PROMPT =====================================================================================
PROMPT MARS-835 Record Count Verification
PROMPT MARS-835 Record Count Verification (HIST-only strategy)
PROMPT =====================================================================================
PROMPT Comparing source table counts with exported external table counts
PROMPT Comparing source table counts with HIST external table counts
PROMPT =====================================================================================
DECLARE
TYPE t_table_info IS RECORD (
source_schema VARCHAR2(50),
source_table VARCHAR2(100),
hist_external_table VARCHAR2(100)
);
TYPE t_table_list IS TABLE OF t_table_info;
vTables t_table_list;
vSourceCount NUMBER;
vDataCount NUMBER;
vHistCount NUMBER;
vTotalSourceCount NUMBER := 0;
vTotalDataCount NUMBER := 0;
vTotalHistCount NUMBER := 0;
vMismatchCount NUMBER := 0;
vSql VARCHAR2(4000);
@@ -42,18 +38,18 @@ BEGIN
DBMS_OUTPUT.PUT_LINE('VERIFICATION TIME: ' || TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS'));
DBMS_OUTPUT.PUT_LINE('');
-- Initialize table list with export configuration
-- Initialize table list (all tables HIST-only)
vTables := t_table_list(
t_table_info('OU_CSDB', 'LEGACY_DEBT', 'ODS.CSDB_DEBT_ARCHIVE'),
t_table_info('OU_CSDB', 'LEGACY_DEBT_DAILY', 'ODS.CSDB_DEBT_DAILY_ARCHIVE'),
t_table_info('OU_CSDB', 'LEGACY_INSTR_RAT_FULL', 'ODS.CSDB_INSTR_RAT_FULL_ARCHIVE'),
t_table_info('OU_CSDB', 'LEGACY_INSTR_DESC_FULL', 'ODS.CSDB_INSTR_DESC_FULL_ARCHIVE'),
t_table_info('OU_CSDB', 'LEGACY_ISSUER_RAT_FULL', 'ODS.CSDB_ISSUER_RAT_FULL_ARCHIVE'),
t_table_info('OU_CSDB', 'LEGACY_ISSUER_DESC_FULL', 'ODS.CSDB_ISSUER_DESC_FULL_ARCHIVE')
);
DBMS_OUTPUT.PUT_LINE('-----------------------------------------------------------------------------------------');
DBMS_OUTPUT.PUT_LINE('Table Name Source Count DATA Count HIST Count Status');
DBMS_OUTPUT.PUT_LINE('Table Name Source Count HIST Count Status');
DBMS_OUTPUT.PUT_LINE('-----------------------------------------------------------------------------------------');
FOR i IN 1..vTables.COUNT LOOP
@@ -70,31 +66,6 @@ BEGIN
CONTINUE;
END;
-- Get DATA external table count (if applicable)
IF vTables(i).has_data_export THEN
vSql := 'SELECT COUNT(*) FROM ' || vTables(i).data_external_table;
BEGIN
EXECUTE IMMEDIATE vSql INTO vDataCount;
vTotalDataCount := vTotalDataCount + vDataCount;
EXCEPTION
WHEN OTHERS THEN
-- If source table is empty (0 records), no files were exported
-- External table returns error, treat as 0
-- Acceptable error codes:
-- ORA-29913: error in executing ODCIEXTTABLEOPEN callout
-- ORA-29400: data cartridge error
-- KUP-13023: nothing matched wildcard query (no files in bucket)
-- NOTE: ORA-30653 (reject limit) is a real data quality error, not treated as empty
IF vSourceCount = 0 OR SQLCODE IN (-29913, -29400) OR SQLERRM LIKE '%KUP-13023%' THEN
vDataCount := 0;
ELSE
vDataCount := -1;
END IF;
END;
ELSE
vDataCount := NULL;
END IF;
-- Get HIST external table count
vSql := 'SELECT COUNT(*) FROM ' || vTables(i).hist_external_table;
BEGIN
@@ -119,18 +90,8 @@ BEGIN
-- Display results
DECLARE
vStatus VARCHAR2(20);
vDataDisplay VARCHAR2(17);
vHistDisplay VARCHAR2(17);
BEGIN
-- Format DATA count display
IF vDataCount IS NULL THEN
vDataDisplay := 'N/A';
ELSIF vDataCount = -1 THEN
vDataDisplay := 'ERROR';
ELSE
vDataDisplay := TO_CHAR(vDataCount, '9,999,999,999');
END IF;
-- Format HIST count display
IF vHistCount = -1 THEN
vHistDisplay := 'ERROR';
@@ -138,35 +99,20 @@ BEGIN
vHistDisplay := TO_CHAR(vHistCount, '9,999,999,999');
END IF;
-- Determine status (HIST only: check HIST = SOURCE)
IF vHistCount = vSourceCount THEN
vStatus := 'PASS';
ELSIF vHistCount = -1 THEN
vStatus := 'ERROR';
vMismatchCount := vMismatchCount + 1;
ELSE
vStatus := 'MISMATCH';
vMismatchCount := vMismatchCount + 1;
END IF;
DBMS_OUTPUT.PUT_LINE(
RPAD(vTables(i).source_table, 24) ||
LPAD(TO_CHAR(vSourceCount, '9,999,999,999'), 15) ||
LPAD(vDataDisplay, 15) ||
LPAD(vHistDisplay, 15) || ' ' ||
vStatus
);
@@ -177,18 +123,16 @@ BEGIN
DBMS_OUTPUT.PUT_LINE(
RPAD('TOTALS', 24) ||
LPAD(TO_CHAR(vTotalSourceCount, '9,999,999,999'), 15) ||
LPAD(TO_CHAR(vTotalDataCount, '9,999,999,999'), 15) ||
LPAD(TO_CHAR(vTotalHistCount, '9,999,999,999'), 15)
);
DBMS_OUTPUT.PUT_LINE('-----------------------------------------------------------------------------------------');
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
DBMS_OUTPUT.PUT_LINE('Record Count Verification Summary');
DBMS_OUTPUT.PUT_LINE('Record Count Verification Summary (HIST-only strategy)');
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
DBMS_OUTPUT.PUT_LINE('Total source records: ' || TO_CHAR(vTotalSourceCount, '9,999,999,999'));
DBMS_OUTPUT.PUT_LINE('Total DATA records: ' || TO_CHAR(vTotalDataCount, '9,999,999,999') || ' (last 6 months)');
DBMS_OUTPUT.PUT_LINE('Total HIST records: ' || TO_CHAR(vTotalHistCount, '9,999,999,999') || ' (historical + full exports)');
DBMS_OUTPUT.PUT_LINE('Total HIST records: ' || TO_CHAR(vTotalHistCount, '9,999,999,999') || ' (all data in HIST)');
DBMS_OUTPUT.PUT_LINE('');
IF vMismatchCount = 0 THEN
@@ -209,7 +153,6 @@ BEGIN
DBMS_OUTPUT.PUT_LINE(' MISMATCH - Record counts differ (may be pre-existing files or export issue)');
DBMS_OUTPUT.PUT_LINE(' Check pre-check results to identify pre-existing files');
DBMS_OUTPUT.PUT_LINE(' ERROR - Cannot access table (may not exist yet)');
DBMS_OUTPUT.PUT_LINE(' N/A - Not applicable (table not exported to DATA)');
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
EXCEPTION


@@ -1,68 +1,34 @@
--=============================================================================================================================
-- MARS-835 ROLLBACK: Delete Group 1 Exported Files (DEBT, DEBT_DAILY)
--=============================================================================================================================
-- Purpose: Delete exported CSV and Parquet files from DATA and HIST buckets
-- Purpose: Delete exported Parquet files from HIST bucket (ARCHIVE only)
-- WARNING: This will permanently delete exported data files!
-- Author: Grzegorz Michalski
-- Date: 2025-12-17
-- Updated: 2026-02-24 (Changed to HIST-only rollback, no DATA bucket)
-- Related: MARS-835 - CSDB Data Export Rollback
--=============================================================================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
PROMPT ========================================================================
PROMPT ROLLBACK: Deleting DEBT exported files
PROMPT ROLLBACK: Deleting DEBT exported files from HIST
PROMPT ========================================================================
PROMPT WARNING: This will delete files from:
PROMPT - DATA bucket: mrds_data_dev/ODS/CSDB/CSDB_DEBT/
PROMPT - HIST bucket: mrds_hist_dev/ARCHIVE/CSDB/CSDB_DEBT/
PROMPT ========================================================================
DECLARE
vDataBucketUri VARCHAR2(500);
vHistBucketUri VARCHAR2(500);
vCredentialName VARCHAR2(100);
vFileCount NUMBER := 0;
BEGIN
-- Get bucket URIs and credential
vDataBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA');
-- Get bucket URI and credential
vHistBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('ARCHIVE');
vCredentialName := CT_MRDS.ENV_MANAGER.gvCredentialName;
DBMS_OUTPUT.PUT_LINE('Deleting DEBT CSV files from DATA bucket...');
DBMS_OUTPUT.PUT_LINE(' Using DBMS_CLOUD.LIST_OBJECTS to scan bucket');
-- Delete CSV files for DEBT from DATA bucket using LIST_OBJECTS
FOR rec IN (
SELECT object_name
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(
credential_name => vCredentialName,
location_uri => vDataBucketUri || 'ODS/CSDB/CSDB_DEBT/'
))
WHERE object_name LIKE 'LEGACY_DEBT%'
) LOOP
BEGIN
DBMS_CLOUD.DELETE_OBJECT(
credential_name => vCredentialName,
object_uri => vDataBucketUri || 'ODS/CSDB/CSDB_DEBT/' || rec.object_name
);
DBMS_OUTPUT.PUT_LINE(' Deleted: ' || rec.object_name);
vFileCount := vFileCount + 1;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -20404 THEN
DBMS_OUTPUT.PUT_LINE(' Skipped (not found): ' || rec.object_name);
ELSE
RAISE;
END IF;
END;
END LOOP;
DBMS_OUTPUT.PUT_LINE('SUCCESS: DEBT CSV files deleted from DATA bucket (' || vFileCount || ' file(s))');
DBMS_OUTPUT.PUT_LINE('Deleting DEBT Parquet files from ARCHIVE bucket...');
DBMS_OUTPUT.PUT_LINE(' Using DBMS_CLOUD.LIST_OBJECTS (Parquet files not registered)');
vFileCount := 0;
DBMS_OUTPUT.PUT_LINE(' Using DBMS_CLOUD.LIST_OBJECTS');
-- Delete Parquet files from ARCHIVE bucket using DBMS_CLOUD.LIST_OBJECTS
FOR rec IN (
@@ -99,58 +65,23 @@ END;
/
PROMPT ========================================================================
PROMPT ROLLBACK: Deleting DEBT_DAILY exported files
PROMPT ROLLBACK: Deleting DEBT_DAILY exported files from HIST
PROMPT ========================================================================
PROMPT WARNING: This will delete files from:
PROMPT - DATA bucket: mrds_data_dev/ODS/CSDB/CSDB_DEBT_DAILY/
PROMPT - HIST bucket: mrds_hist_dev/ARCHIVE/CSDB/CSDB_DEBT_DAILY/
PROMPT ========================================================================
DECLARE
vDataBucketUri VARCHAR2(500);
vHistBucketUri VARCHAR2(500);
vCredentialName VARCHAR2(100);
vFileCount NUMBER := 0;
BEGIN
-- Get bucket URIs and credential
vDataBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA');
-- Get bucket URI and credential
vHistBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('ARCHIVE');
vCredentialName := CT_MRDS.ENV_MANAGER.gvCredentialName;
DBMS_OUTPUT.PUT_LINE('Deleting DEBT_DAILY CSV files from DATA bucket...');
DBMS_OUTPUT.PUT_LINE(' Using DBMS_CLOUD.LIST_OBJECTS to scan bucket');
-- Delete CSV files for DEBT_DAILY from DATA bucket using LIST_OBJECTS
FOR rec IN (
SELECT object_name
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(
credential_name => vCredentialName,
location_uri => vDataBucketUri || 'ODS/CSDB/CSDB_DEBT_DAILY/'
))
WHERE object_name LIKE 'LEGACY_DEBT_DAILY%'
) LOOP
BEGIN
DBMS_CLOUD.DELETE_OBJECT(
credential_name => vCredentialName,
object_uri => vDataBucketUri || 'ODS/CSDB/CSDB_DEBT_DAILY/' || rec.object_name
);
DBMS_OUTPUT.PUT_LINE(' Deleted: ' || rec.object_name);
vFileCount := vFileCount + 1;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -20404 THEN
DBMS_OUTPUT.PUT_LINE(' Skipped (not found): ' || rec.object_name);
ELSE
RAISE;
END IF;
END;
END LOOP;
DBMS_OUTPUT.PUT_LINE('SUCCESS: DEBT_DAILY CSV files deleted from DATA bucket (' || vFileCount || ' file(s))');
DBMS_OUTPUT.PUT_LINE('Deleting DEBT_DAILY Parquet files from ARCHIVE bucket...');
DBMS_OUTPUT.PUT_LINE(' Using DBMS_CLOUD.LIST_OBJECTS (Parquet files not registered)');
vFileCount := 0;
DBMS_OUTPUT.PUT_LINE(' Using DBMS_CLOUD.LIST_OBJECTS');
-- Delete Parquet files from ARCHIVE bucket using DBMS_CLOUD.LIST_OBJECTS
FOR rec IN (


@@ -1,10 +1,11 @@
-- =====================================================================================
-- Script: 99_MARS_835_verify_rollback.sql
-- Purpose: Verify all exported files have been deleted from DATA and HIST buckets
-- Purpose: Verify all exported files have been deleted from HIST bucket (HIST-only strategy)
-- Author: Grzegorz Michalski
-- Created: 2025-12-17
-- Updated: 2026-02-24 (Changed to HIST-only verification)
-- MARS Issue: MARS-835
-- Verification: Confirm complete rollback (no CSDB files remaining)
-- Verification: Confirm complete rollback (no CSDB files remaining in HIST)
-- =====================================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED;
@@ -19,33 +20,23 @@ PROMPT Checking that all CSDB export files have been deleted
PROMPT =====================================================================================
DECLARE
vDataBucketUri VARCHAR2(500);
vHistBucketUri VARCHAR2(500);
vCredentialName VARCHAR2(100);
vDataFileCount NUMBER := 0;
vHistFileCount NUMBER := 0;
vTotalFiles NUMBER := 0;
TYPE t_folder_list IS TABLE OF VARCHAR2(200);
vDataFolders t_folder_list;
vHistFolders t_folder_list;
BEGIN
-- Get bucket URIs
vDataBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('DATA');
-- Get bucket URI
vHistBucketUri := CT_MRDS.FILE_MANAGER.GET_BUCKET_URI('ARCHIVE');
vCredentialName := CT_MRDS.ENV_MANAGER.gvCredentialName;
DBMS_OUTPUT.PUT_LINE('ROLLBACK VERIFICATION TIME: ' || TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS.FF3'));
DBMS_OUTPUT.PUT_LINE('DATA Bucket URI: ' || vDataBucketUri);
DBMS_OUTPUT.PUT_LINE('HIST Bucket URI: ' || vHistBucketUri);
DBMS_OUTPUT.PUT_LINE('');
-- Initialize folder lists
vDataFolders := t_folder_list(
'ODS/CSDB/CSDB_DEBT/',
'ODS/CSDB/CSDB_DEBT_DAILY/'
);
-- Initialize folder list (all 6 tables in HIST)
vHistFolders := t_folder_list(
'ARCHIVE/CSDB/CSDB_DEBT/',
'ARCHIVE/CSDB/CSDB_DEBT_DAILY/',
@@ -55,47 +46,6 @@ BEGIN
'ARCHIVE/CSDB/CSDB_ISSUER_DESC_FULL/'
);
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
DBMS_OUTPUT.PUT_LINE('Checking DATA Bucket (should be empty)');
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
-- Check DATA bucket
FOR i IN 1..vDataFolders.COUNT LOOP
DECLARE
vCount NUMBER := 0;
BEGIN
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('Folder: ' || vDataFolders(i));
FOR rec IN (
SELECT object_name
FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(
credential_name => vCredentialName,
location_uri => vDataBucketUri || vDataFolders(i)
))
WHERE object_name LIKE '%.csv'
) LOOP
vCount := vCount + 1;
vDataFileCount := vDataFileCount + 1;
DBMS_OUTPUT.PUT_LINE(' [FOUND] ' || rec.object_name);
END LOOP;
IF vCount = 0 THEN
DBMS_OUTPUT.PUT_LINE(' [OK] No CSV files found');
ELSE
DBMS_OUTPUT.PUT_LINE(' [INFO] Found ' || vCount || ' file(s) - may be pre-existing files from before installation');
END IF;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -20404 THEN
DBMS_OUTPUT.PUT_LINE(' [OK] Folder does not exist or is empty');
ELSE
DBMS_OUTPUT.PUT_LINE(' [ERROR] ' || SQLERRM);
END IF;
END;
END LOOP;
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
DBMS_OUTPUT.PUT_LINE('Checking HIST Bucket (should be empty)');
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
@@ -139,24 +89,21 @@ BEGIN
END;
END LOOP;
vTotalFiles := vDataFileCount + vHistFileCount;
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
DBMS_OUTPUT.PUT_LINE('Rollback Verification Summary');
DBMS_OUTPUT.PUT_LINE('=====================================================================================');
DBMS_OUTPUT.PUT_LINE('DATA bucket files remaining: ' || vDataFileCount);
DBMS_OUTPUT.PUT_LINE('HIST bucket files remaining: ' || vHistFileCount || '+');
DBMS_OUTPUT.PUT_LINE('Total files found: ' || vTotalFiles || '+');
DBMS_OUTPUT.PUT_LINE('');
IF vHistFileCount = 0 THEN
DBMS_OUTPUT.PUT_LINE('[PASSED] ROLLBACK VERIFICATION PASSED');
DBMS_OUTPUT.PUT_LINE(' All CSDB export files have been deleted or were not created');
DBMS_OUTPUT.PUT_LINE(' Buckets are clean and ready for re-export if needed');
DBMS_OUTPUT.PUT_LINE(' HIST bucket is clean and ready for re-export if needed');
ELSE
DBMS_OUTPUT.PUT_LINE('[INFO] ROLLBACK VERIFICATION COMPLETED');
DBMS_OUTPUT.PUT_LINE(' Found ' || vTotalFiles || '+ file(s) remaining in buckets');
DBMS_OUTPUT.PUT_LINE(' Found ' || vHistFileCount || '+ file(s) remaining in HIST bucket');
DBMS_OUTPUT.PUT_LINE(' NOTE: These may be pre-existing files from before installation.');
DBMS_OUTPUT.PUT_LINE(' Rollback only deletes files created during this export operation.');
DBMS_OUTPUT.PUT_LINE(' If needed, manually verify and clean up remaining files.');


@@ -0,0 +1,5 @@
# Exclude temporary folders from version control
confluence/
log/
test/
mock_data/


@@ -0,0 +1,249 @@
-- ============================================================================
-- MARS-1409-POSTHOOK Step 01: Update A_WORKFLOW_HISTORY_KEY for existing records
-- ============================================================================
-- Purpose: Populate A_WORKFLOW_HISTORY_KEY for existing A_SOURCE_FILE_RECEIVED records
-- by extracting values from corresponding ODS tables
-- Author: Grzegorz Michalski
-- Date: 2026-03-13
-- Prerequisites:
-- - MARS-1409 installed (A_WORKFLOW_HISTORY_KEY column exists in A_SOURCE_FILE_RECEIVED)
-- - ODS tables contain A_WORKFLOW_HISTORY_KEY and file$name columns
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
WHENEVER SQLERROR EXIT SQL.SQLCODE
PROMPT
PROMPT Updating A_WORKFLOW_HISTORY_KEY for existing A_SOURCE_FILE_RECEIVED records...
DECLARE
vUpdatedTotal NUMBER := 0;
vUpdatedCurrent NUMBER := 0;
vFailedConfigs NUMBER := 0;
vTableNotFound NUMBER := 0;
vSkippedConfigs NUMBER := 0;
vEmptyTables NUMBER := 0;
vHasData NUMBER := 0;
vTableName VARCHAR2(200);
vSQL VARCHAR2(32767);
vRecordsToUpdate NUMBER := 0;
vRemainingTargeted NUMBER := 0;
vTableExists NUMBER := 0;
BEGIN
-- Count total records to update
SELECT COUNT(*) INTO vRecordsToUpdate
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE A_WORKFLOW_HISTORY_KEY IS NULL
AND PROCESSING_STATUS IN ('VALIDATED', 'READY_FOR_INGESTION', 'INGESTED', 'ARCHIVED', 'ARCHIVED_AND_TRASHED', 'ARCHIVED_AND_PURGED');
DBMS_OUTPUT.PUT_LINE('Found ' || vRecordsToUpdate || ' records with NULL A_WORKFLOW_HISTORY_KEY');
DBMS_OUTPUT.PUT_LINE('----------------------------------------');
-- Process each INPUT configuration that has records to update
FOR config_rec IN (
SELECT
sfc.A_SOURCE_FILE_CONFIG_KEY,
sfc.A_SOURCE_KEY,
sfc.SOURCE_FILE_ID,
sfc.TABLE_ID,
sfc.TEMPLATE_TABLE_NAME,
(SELECT COUNT(*)
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr
WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = sfc.A_SOURCE_FILE_CONFIG_KEY
AND sfr.A_WORKFLOW_HISTORY_KEY IS NULL
AND sfr.PROCESSING_STATUS IN ('VALIDATED', 'READY_FOR_INGESTION', 'INGESTED', 'ARCHIVED', 'ARCHIVED_AND_TRASHED', 'ARCHIVED_AND_PURGED')
) AS NULL_COUNT
FROM CT_MRDS.A_SOURCE_FILE_CONFIG sfc
WHERE sfc.SOURCE_FILE_TYPE = 'INPUT'
AND sfc.TABLE_ID IS NOT NULL
ORDER BY sfc.A_SOURCE_KEY, sfc.SOURCE_FILE_ID, sfc.TABLE_ID
) LOOP
IF config_rec.NULL_COUNT = 0 THEN
vSkippedConfigs := vSkippedConfigs + 1;
DBMS_OUTPUT.PUT_LINE('SKIP: Config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (' || config_rec.A_SOURCE_KEY || '/' || config_rec.SOURCE_FILE_ID || '/' || config_rec.TABLE_ID ||
') - no records to update');
CONTINUE;
END IF;
BEGIN
-- Construct ODS table name from TABLE_ID (ODS tables have _ODS suffix)
vTableName := 'ODS.' || config_rec.TABLE_ID || '_ODS';
-- Check table existence before attempting dynamic SQL
SELECT COUNT(*) INTO vTableExists
FROM ALL_TABLES
WHERE OWNER = 'ODS'
AND TABLE_NAME = config_rec.TABLE_ID || '_ODS';
IF vTableExists = 0 THEN
vTableNotFound := vTableNotFound + 1;
DBMS_OUTPUT.PUT_LINE('SKIP: Config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (' || config_rec.A_SOURCE_KEY || '/' || config_rec.SOURCE_FILE_ID || '/' || config_rec.TABLE_ID ||
') - ODS table not found: ' || vTableName);
CONTINUE;
END IF;
-- Pre-check: verify ODS table has accessible data (empty external table throws ORA-29913/KUP-05002)
vHasData := 0;
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM (SELECT 1 FROM ' || vTableName || ' t WHERE ROWNUM = 1)'
INTO vHasData;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -29913 OR INSTR(SQLERRM, 'KUP-05002') > 0 THEN
NULL; -- vHasData stays 0
ELSE
RAISE;
END IF;
END;
IF vHasData = 0 THEN
vEmptyTables := vEmptyTables + 1;
DBMS_OUTPUT.PUT_LINE('SKIP: Config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (' || config_rec.A_SOURCE_KEY || '/' || config_rec.SOURCE_FILE_ID || '/' || config_rec.TABLE_ID ||
') - ODS table has no files at storage location (empty): ' || vTableName);
CONTINUE;
END IF;
DBMS_OUTPUT.PUT_LINE('Processing config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (' || config_rec.A_SOURCE_KEY || '/' || config_rec.SOURCE_FILE_ID || '/' || config_rec.TABLE_ID || ')...');
-- Update using ODS table
-- NO_PARALLEL hint required: ODS external tables (OCI Object Storage) fail with ORA-12801 under parallel query
vSQL :=
'UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'SET A_WORKFLOW_HISTORY_KEY = ( ' ||
' SELECT /*+ NO_PARALLEL(t) */ t.A_WORKFLOW_HISTORY_KEY ' ||
' FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME ' ||
' AND ROWNUM = 1 ' ||
') ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :config_key ' ||
' AND sfr.A_WORKFLOW_HISTORY_KEY IS NULL ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'', ''READY_FOR_INGESTION'', ''INGESTED'', ''ARCHIVED'', ''ARCHIVED_AND_TRASHED'', ''ARCHIVED_AND_PURGED'') ' ||
' AND EXISTS ( ' ||
' SELECT /*+ NO_PARALLEL(t) */ 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME ' ||
' AND ROWNUM = 1 ' ||
' )';
EXECUTE IMMEDIATE vSQL USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
vUpdatedCurrent := SQL%ROWCOUNT; -- must be captured before COMMIT, which resets SQL%ROWCOUNT
COMMIT;
vUpdatedTotal := vUpdatedTotal + vUpdatedCurrent;
IF vUpdatedCurrent > 0 THEN
DBMS_OUTPUT.PUT_LINE(' SUCCESS: Updated ' || vUpdatedCurrent || ' record(s)');
ELSE
DBMS_OUTPUT.PUT_LINE(' INFO: No matching records found in ODS table (files may not be ingested yet)');
END IF;
EXCEPTION
WHEN OTHERS THEN
vFailedConfigs := vFailedConfigs + 1;
DBMS_OUTPUT.PUT_LINE(' ERROR: Unexpected failure for config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (table: ' || vTableName || ')');
DBMS_OUTPUT.PUT_LINE(' Reason: ' || SQLERRM);
-- Continue processing other configurations despite this failure
END;
END LOOP;
COMMIT;
DBMS_OUTPUT.PUT_LINE('----------------------------------------');
DBMS_OUTPUT.PUT_LINE('Update Summary:');
DBMS_OUTPUT.PUT_LINE(' Total records updated: ' || vUpdatedTotal);
DBMS_OUTPUT.PUT_LINE(' Configurations skipped (no NULL records): ' || vSkippedConfigs);
DBMS_OUTPUT.PUT_LINE(' Configurations skipped (ODS table not found): ' || vTableNotFound);
DBMS_OUTPUT.PUT_LINE(' Configurations skipped (ODS table empty - no files at location): ' || vEmptyTables);
DBMS_OUTPUT.PUT_LINE(' Configurations failed (unexpected errors): ' || vFailedConfigs);
-- Check remaining NULL records - targeted statuses only
SELECT COUNT(*) INTO vRemainingTargeted
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE A_WORKFLOW_HISTORY_KEY IS NULL
AND PROCESSING_STATUS IN ('VALIDATED', 'READY_FOR_INGESTION', 'INGESTED', 'ARCHIVED', 'ARCHIVED_AND_TRASHED', 'ARCHIVED_AND_PURGED');
-- Check all remaining NULL records (includes RECEIVED, VALIDATION_FAILED)
SELECT COUNT(*) INTO vRecordsToUpdate
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE A_WORKFLOW_HISTORY_KEY IS NULL;
DBMS_OUTPUT.PUT_LINE(' Remaining NULL records (targeted statuses): ' || vRemainingTargeted);
DBMS_OUTPUT.PUT_LINE(' Remaining NULL records (all statuses): ' || vRecordsToUpdate);
IF vRemainingTargeted > 0 THEN
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('NOTE: Some records with targeted statuses still have NULL A_WORKFLOW_HISTORY_KEY.');
DBMS_OUTPUT.PUT_LINE(' This is expected for files not yet ingested into ODS tables');
DBMS_OUTPUT.PUT_LINE(' or ODS tables with a different structure.');
DBMS_OUTPUT.PUT_LINE(' These records will be populated when files are re-processed.');
END IF;
IF vFailedConfigs > 0 THEN
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('NOTE: ' || vFailedConfigs || ' configuration(s) failed with unexpected errors.');
DBMS_OUTPUT.PUT_LINE(' Review the ERROR lines above and investigate manually.');
END IF;
DBMS_OUTPUT.PUT_LINE('----------------------------------------');
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
DBMS_OUTPUT.PUT_LINE('FATAL ERROR: ' || SQLERRM);
DBMS_OUTPUT.PUT_LINE('Transaction rolled back');
RAISE;
END;
/
PROMPT
PROMPT Existing workflow keys update completed!
PROMPT
-- ============================================================================
-- Step 2: Set PROCESSING_STATUS = 'INGESTED' for records whose workflow
-- completed successfully (mirrors trigger A_WORKFLOW_HISTORY logic)
-- ============================================================================
PROMPT
PROMPT Updating PROCESSING_STATUS to INGESTED for completed workflows...
DECLARE
vUpdatedIngested NUMBER := 0;
BEGIN
UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED sfr
SET sfr.PROCESSING_STATUS = 'INGESTED',
sfr.PROCESS_NAME = (
SELECT wh.service_name
FROM CT_MRDS.A_WORKFLOW_HISTORY wh
WHERE wh.a_workflow_history_key = sfr.a_workflow_history_key
)
WHERE sfr.A_WORKFLOW_HISTORY_KEY IS NOT NULL
AND sfr.PROCESSING_STATUS IN ('READY_FOR_INGESTION')
AND EXISTS (
SELECT 1
FROM CT_MRDS.A_WORKFLOW_HISTORY wh
WHERE wh.a_workflow_history_key = sfr.a_workflow_history_key
AND wh.workflow_successful = 'Y'
);
vUpdatedIngested := SQL%ROWCOUNT;
COMMIT;
DBMS_OUTPUT.PUT_LINE('Updated PROCESSING_STATUS to INGESTED: ' || vUpdatedIngested || ' record(s)');
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
DBMS_OUTPUT.PUT_LINE('FATAL ERROR: ' || SQLERRM);
DBMS_OUTPUT.PUT_LINE('Transaction rolled back');
RAISE;
END;
/
PROMPT
PROMPT INGESTED status update completed!
PROMPT

View File

@@ -0,0 +1,373 @@
-- ============================================================================
-- MARS-1409 Diagnostic: Workflow key status after step 01 (backfill)
-- ============================================================================
-- Purpose: For each INPUT config with an ODS table, report:
-- [A] Files present in ODS bucket but NOT registered in A_SOURCE_FILE_RECEIVED
-- [B] Files registered in A_SOURCE_FILE_RECEIVED but NOT in ODS bucket
-- [C] Files present in both - with A_WORKFLOW_HISTORY_KEY populated
-- [D] Files present in both - A_WORKFLOW_HISTORY_KEY still NULL
--
-- Can be run at any time, read-only (no DML).
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
SET LINESIZE 200
PROMPT
PROMPT ============================================================================
PROMPT Diagnosing workflow key status (ODS bucket vs A_SOURCE_FILE_RECEIVED)
PROMPT ============================================================================
PROMPT
DECLARE
TYPE tStringList IS TABLE OF VARCHAR2(500);
vTableName VARCHAR2(200);
vTableExists NUMBER;
vBucketEmpty BOOLEAN;
vRefCursor SYS_REFCURSOR;
vFileName VARCHAR2(500);
-- Per-config counters
vOnlyInBucket NUMBER;
vOnlyInDB NUMBER;
vInBothWithKey NUMBER;
vInBothNoKey NUMBER;
-- Grand totals
vConfigsChecked NUMBER := 0;
vConfigsWithIssues NUMBER := 0;
vTotalOnlyInBucket NUMBER := 0;
vTotalOnlyInDB NUMBER := 0;
vTotalInBothWithKey NUMBER := 0;
vTotalInBothNoKey NUMBER := 0;
-- How many individual file names to print per category before summarising
cMaxPrint CONSTANT NUMBER := 1000;
vPrinted NUMBER;
FUNCTION IS_EXTERNAL_TABLE_EMPTY_ERROR(
pSqlCode NUMBER,
pSqlErrm VARCHAR2
) RETURN BOOLEAN
IS
BEGIN
RETURN pSqlCode IN (-29913, -29400)
OR INSTR(pSqlErrm, 'KUP-05002') > 0;
END;
BEGIN
FOR config_rec IN (
SELECT sfc.A_SOURCE_FILE_CONFIG_KEY,
sfc.A_SOURCE_KEY,
sfc.SOURCE_FILE_ID,
sfc.TABLE_ID
FROM CT_MRDS.A_SOURCE_FILE_CONFIG sfc
WHERE sfc.SOURCE_FILE_TYPE = 'INPUT'
AND sfc.TABLE_ID IS NOT NULL
ORDER BY sfc.A_SOURCE_KEY, sfc.SOURCE_FILE_ID, sfc.TABLE_ID
) LOOP
vTableName := 'ODS.' || config_rec.TABLE_ID || '_ODS';
SELECT COUNT(*) INTO vTableExists
FROM ALL_TABLES
WHERE OWNER = 'ODS'
AND TABLE_NAME = config_rec.TABLE_ID || '_ODS';
IF vTableExists = 0 THEN
CONTINUE;
END IF;
-- Check if the bucket location has any files at all
-- (empty bucket raises ORA-29913 instead of returning 0 rows)
vBucketEmpty := FALSE;
BEGIN
EXECUTE IMMEDIATE
'SELECT COUNT(*) FROM ' || vTableName || ' t WHERE ROWNUM = 1'
INTO vTableExists;
EXCEPTION
WHEN OTHERS THEN
IF IS_EXTERNAL_TABLE_EMPTY_ERROR(SQLCODE, SQLERRM) THEN
vBucketEmpty := TRUE;
ELSE
RAISE;
END IF;
END;
IF vBucketEmpty THEN
-- Bucket is empty: nothing in ODS, but registered records are all "not in bucket"
vOnlyInBucket := 0;
SELECT COUNT(*) INTO vOnlyInDB
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr
WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = config_rec.A_SOURCE_FILE_CONFIG_KEY
AND sfr.PROCESSING_STATUS IN ('VALIDATED','READY_FOR_INGESTION','INGESTED','ARCHIVED','ARCHIVED_AND_TRASHED','ARCHIVED_AND_PURGED');
vInBothWithKey := 0;
vInBothNoKey := 0;
ELSE
BEGIN
-- ----------------------------------------------------------------
-- [A] In ODS bucket but NOT in A_SOURCE_FILE_RECEIVED
-- ----------------------------------------------------------------
EXECUTE IMMEDIATE
'SELECT COUNT(DISTINCT t.file$name) ' ||
'FROM ' || vTableName || ' t ' ||
'WHERE t.file$name IS NOT NULL ' ||
' AND NOT EXISTS ( ' ||
' SELECT 1 FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
' WHERE sfr.SOURCE_FILE_NAME = t.file$name ' ||
' AND sfr.A_SOURCE_FILE_CONFIG_KEY = :1)'
INTO vOnlyInBucket
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
-- ----------------------------------------------------------------
-- [B] In A_SOURCE_FILE_RECEIVED (targeted statuses) but NOT in ODS bucket
-- ----------------------------------------------------------------
EXECUTE IMMEDIATE
'SELECT COUNT(*) ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND NOT EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME)'
INTO vOnlyInDB
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
-- ----------------------------------------------------------------
-- [C] In both, A_WORKFLOW_HISTORY_KEY IS NOT NULL
-- ----------------------------------------------------------------
EXECUTE IMMEDIATE
'SELECT COUNT(DISTINCT sfr.A_SOURCE_FILE_RECEIVED_KEY) ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.A_WORKFLOW_HISTORY_KEY IS NOT NULL ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME)'
INTO vInBothWithKey
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
-- ----------------------------------------------------------------
-- [D] In both, A_WORKFLOW_HISTORY_KEY IS NULL
-- ----------------------------------------------------------------
EXECUTE IMMEDIATE
'SELECT COUNT(DISTINCT sfr.A_SOURCE_FILE_RECEIVED_KEY) ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.A_WORKFLOW_HISTORY_KEY IS NULL ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME)'
INTO vInBothNoKey
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
EXCEPTION
WHEN OTHERS THEN
IF IS_EXTERNAL_TABLE_EMPTY_ERROR(SQLCODE, SQLERRM) THEN
vBucketEmpty := TRUE;
vOnlyInBucket := 0;
SELECT COUNT(*) INTO vOnlyInDB
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr
WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = config_rec.A_SOURCE_FILE_CONFIG_KEY
AND sfr.PROCESSING_STATUS IN ('VALIDATED','READY_FOR_INGESTION','INGESTED','ARCHIVED','ARCHIVED_AND_TRASHED','ARCHIVED_AND_PURGED');
vInBothWithKey := 0;
vInBothNoKey := 0;
DBMS_OUTPUT.PUT_LINE(' NOTE: ODS bucket became empty/inaccessible during diagnostics for ' || vTableName || '. Falling back to DB-only counts for [B].');
ELSE
RAISE;
END IF;
END;
END IF; -- vBucketEmpty
-- Skip configs with nothing to report
IF vOnlyInBucket = 0 AND vOnlyInDB = 0 AND vInBothWithKey = 0 AND vInBothNoKey = 0 THEN
CONTINUE;
END IF;
vConfigsChecked := vConfigsChecked + 1;
DBMS_OUTPUT.PUT_LINE('Config ' || config_rec.A_SOURCE_FILE_CONFIG_KEY ||
' (' || config_rec.A_SOURCE_KEY ||
'/' || config_rec.SOURCE_FILE_ID ||
'/' || config_rec.TABLE_ID || ')');
DBMS_OUTPUT.PUT_LINE(' [A] In bucket, not registered: ' || vOnlyInBucket);
DBMS_OUTPUT.PUT_LINE(' [B] Registered, not in bucket: ' || vOnlyInDB);
DBMS_OUTPUT.PUT_LINE(' [C] In both, A_WORKFLOW_HISTORY_KEY set: ' || vInBothWithKey);
DBMS_OUTPUT.PUT_LINE(' [D] In both, A_WORKFLOW_HISTORY_KEY NULL: ' || vInBothNoKey);
-- Print individual file names for categories with problems
IF vOnlyInBucket > 0 THEN
DBMS_OUTPUT.PUT_LINE(' [A] Files in bucket not registered:');
vPrinted := 0;
OPEN vRefCursor FOR
'SELECT DISTINCT t.file$name ' ||
'FROM ' || vTableName || ' t ' ||
'WHERE t.file$name IS NOT NULL ' ||
' AND NOT EXISTS ( ' ||
' SELECT 1 FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
' WHERE sfr.SOURCE_FILE_NAME = t.file$name ' ||
' AND sfr.A_SOURCE_FILE_CONFIG_KEY = :1) ' ||
'ORDER BY t.file$name'
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
LOOP
FETCH vRefCursor INTO vFileName;
EXIT WHEN vRefCursor%NOTFOUND;
vPrinted := vPrinted + 1;
IF vPrinted <= cMaxPrint THEN
DBMS_OUTPUT.PUT_LINE(' ' || vFileName);
ELSIF vPrinted = cMaxPrint + 1 THEN
DBMS_OUTPUT.PUT_LINE(' ... and ' || (vOnlyInBucket - cMaxPrint) || ' more');
END IF;
END LOOP;
CLOSE vRefCursor;
END IF;
IF vOnlyInDB > 0 THEN
vConfigsWithIssues := vConfigsWithIssues + 1;
DBMS_OUTPUT.PUT_LINE(' [B] Registered files not found in bucket:');
vPrinted := 0;
BEGIN
IF vBucketEmpty THEN
OPEN vRefCursor FOR
'SELECT sfr.SOURCE_FILE_NAME, sfr.PROCESSING_STATUS, sfr.A_WORKFLOW_HISTORY_KEY ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
'ORDER BY sfr.SOURCE_FILE_NAME'
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
ELSE
OPEN vRefCursor FOR
'SELECT sfr.SOURCE_FILE_NAME, sfr.PROCESSING_STATUS, sfr.A_WORKFLOW_HISTORY_KEY ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND NOT EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME) ' ||
'ORDER BY sfr.SOURCE_FILE_NAME'
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
END IF;
EXCEPTION
WHEN OTHERS THEN
IF IS_EXTERNAL_TABLE_EMPTY_ERROR(SQLCODE, SQLERRM) THEN
vBucketEmpty := TRUE;
DBMS_OUTPUT.PUT_LINE(' NOTE: Skipping ODS anti-join details due to empty/inaccessible external table for ' || vTableName || '.');
OPEN vRefCursor FOR
'SELECT sfr.SOURCE_FILE_NAME, sfr.PROCESSING_STATUS, sfr.A_WORKFLOW_HISTORY_KEY ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
'ORDER BY sfr.SOURCE_FILE_NAME'
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
ELSE
RAISE;
END IF;
END;
LOOP
DECLARE
vStatus VARCHAR2(50);
vWfKey NUMBER;
BEGIN
FETCH vRefCursor INTO vFileName, vStatus, vWfKey;
EXIT WHEN vRefCursor%NOTFOUND;
vPrinted := vPrinted + 1;
IF vPrinted <= cMaxPrint THEN
DBMS_OUTPUT.PUT_LINE(' ' || vFileName ||
' status=' || vStatus ||
' wf_key=' || NVL(TO_CHAR(vWfKey), 'NULL'));
ELSIF vPrinted = cMaxPrint + 1 THEN
DBMS_OUTPUT.PUT_LINE(' ... and ' || (vOnlyInDB - cMaxPrint) || ' more');
END IF;
END;
END LOOP;
CLOSE vRefCursor;
END IF;
IF vInBothNoKey > 0 THEN
IF vOnlyInDB = 0 THEN
vConfigsWithIssues := vConfigsWithIssues + 1; -- avoid counting a config twice when it was already counted under [B]
END IF;
DBMS_OUTPUT.PUT_LINE(' [D] Files in both but A_WORKFLOW_HISTORY_KEY still NULL:');
vPrinted := 0;
OPEN vRefCursor FOR
'SELECT sfr.SOURCE_FILE_NAME, sfr.PROCESSING_STATUS ' ||
'FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr ' ||
'WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1 ' ||
' AND sfr.A_WORKFLOW_HISTORY_KEY IS NULL ' ||
' AND sfr.PROCESSING_STATUS IN (''VALIDATED'',''READY_FOR_INGESTION'',''INGESTED'',''ARCHIVED'',''ARCHIVED_AND_TRASHED'',''ARCHIVED_AND_PURGED'') ' ||
' AND EXISTS ( ' ||
' SELECT 1 FROM ' || vTableName || ' t ' ||
' WHERE t.file$name = sfr.SOURCE_FILE_NAME) ' ||
'ORDER BY sfr.SOURCE_FILE_NAME'
USING config_rec.A_SOURCE_FILE_CONFIG_KEY;
LOOP
DECLARE
vStatus VARCHAR2(50);
BEGIN
FETCH vRefCursor INTO vFileName, vStatus;
EXIT WHEN vRefCursor%NOTFOUND;
vPrinted := vPrinted + 1;
IF vPrinted <= cMaxPrint THEN
DBMS_OUTPUT.PUT_LINE(' ' || vFileName || ' status=' || vStatus);
ELSIF vPrinted = cMaxPrint + 1 THEN
DBMS_OUTPUT.PUT_LINE(' ... and ' || (vInBothNoKey - cMaxPrint) || ' more');
END IF;
END;
END LOOP;
CLOSE vRefCursor;
END IF;
DBMS_OUTPUT.PUT_LINE('');
-- Accumulate totals
vTotalOnlyInBucket := vTotalOnlyInBucket + vOnlyInBucket;
vTotalOnlyInDB := vTotalOnlyInDB + vOnlyInDB;
vTotalInBothWithKey := vTotalInBothWithKey + vInBothWithKey;
vTotalInBothNoKey := vTotalInBothNoKey + vInBothNoKey;
END LOOP;
DBMS_OUTPUT.PUT_LINE('============================================================================');
DBMS_OUTPUT.PUT_LINE('Grand Summary:');
DBMS_OUTPUT.PUT_LINE(' Configs with data checked: ' || vConfigsChecked);
DBMS_OUTPUT.PUT_LINE(' Configs with issues (B or D): ' || vConfigsWithIssues);
DBMS_OUTPUT.PUT_LINE(' [A] Files in bucket, not registered: ' || vTotalOnlyInBucket);
DBMS_OUTPUT.PUT_LINE(' [B] Registered, not in bucket: ' || vTotalOnlyInDB);
DBMS_OUTPUT.PUT_LINE(' [C] In both - A_WORKFLOW_HISTORY_KEY set: ' || vTotalInBothWithKey);
DBMS_OUTPUT.PUT_LINE(' [D] In both - A_WORKFLOW_HISTORY_KEY NULL: ' || vTotalInBothNoKey);
DBMS_OUTPUT.PUT_LINE('============================================================================');
IF vTotalOnlyInDB > 0 THEN
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('WARNING [B]: ' || vTotalOnlyInDB || ' registered file(s) not found in ODS bucket.');
DBMS_OUTPUT.PUT_LINE(' These may have been moved to ARCHIVE or deleted from ODS.');
END IF;
IF vTotalInBothNoKey > 0 THEN
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('WARNING [D]: ' || vTotalInBothNoKey || ' file(s) present in both but A_WORKFLOW_HISTORY_KEY is still NULL.');
DBMS_OUTPUT.PUT_LINE(' ODS table rows for these files may have A_WORKFLOW_HISTORY_KEY = NULL.');
DBMS_OUTPUT.PUT_LINE(' Re-run step 01 (the backfill) after the ODS rows are populated by the pipeline.');
END IF;
IF vConfigsWithIssues = 0 THEN
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('OK: No issues found. All registered files in ODS have A_WORKFLOW_HISTORY_KEY assigned.');
END IF;
EXCEPTION
WHEN OTHERS THEN
IF vRefCursor%ISOPEN THEN
CLOSE vRefCursor;
END IF;
DBMS_OUTPUT.PUT_LINE('ERROR: ' || SQLERRM);
RAISE;
END;
/
PROMPT
PROMPT Diagnosis complete.
PROMPT

View File

@@ -0,0 +1,43 @@
-- ============================================================================
-- MARS-1409-POSTHOOK Rollback Step 91: Clear backfilled A_WORKFLOW_HISTORY_KEY values
-- ============================================================================
-- Purpose: Reset A_WORKFLOW_HISTORY_KEY to NULL for all records in
-- A_SOURCE_FILE_RECEIVED. Reverts the backfill performed by
-- 01_MARS_1409_POSTHOOK_update_existing_workflow_keys.sql.
-- Author: Grzegorz Michalski
-- Date: 2026-03-13
-- Note: Records populated by the new trigger (after MARS-1409 install) will also
-- be cleared. The trigger will repopulate them on next file processing.
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
WHENEVER SQLERROR EXIT SQL.SQLCODE
PROMPT
PROMPT Clearing backfilled A_WORKFLOW_HISTORY_KEY values...
DECLARE
vCleared NUMBER := 0;
BEGIN
UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED
SET A_WORKFLOW_HISTORY_KEY = NULL
WHERE A_WORKFLOW_HISTORY_KEY IS NOT NULL;
vCleared := SQL%ROWCOUNT;
COMMIT;
DBMS_OUTPUT.PUT_LINE('Cleared A_WORKFLOW_HISTORY_KEY for ' || vCleared || ' record(s)');
DBMS_OUTPUT.PUT_LINE('Rollback of backfill completed successfully');
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
DBMS_OUTPUT.PUT_LINE('FATAL ERROR: ' || SQLERRM);
DBMS_OUTPUT.PUT_LINE('Transaction rolled back');
RAISE;
END;
/
PROMPT
PROMPT Workflow keys rollback completed!
PROMPT

View File

@@ -0,0 +1,60 @@
# MARS-1409-POSTHOOK: Backfill A_WORKFLOW_HISTORY_KEY for existing records
## Overview
Post-hook for MARS-1409. Backfills `A_WORKFLOW_HISTORY_KEY` in
`CT_MRDS.A_SOURCE_FILE_RECEIVED` for historical records that existed before
MARS-1409 was installed.
Matches records by `SOURCE_FILE_NAME` against `file$name` in the corresponding
ODS table (`ODS.<TABLE_ID>_ODS`) for each `INPUT` source configuration.
## Contents
| File | Description |
|------|-------------|
| `install_mars1409_posthook.sql` | Master installation script (SPOOL, ACCEPT, quit) |
| `rollback_mars1409_posthook.sql` | Master rollback script (SPOOL, ACCEPT, quit) |
| `01_MARS_1409_POSTHOOK_update_existing_workflow_keys.sql` | Backfill UPDATE script |
| `02_MARS_1409_POSTHOOK_diagnose_workflow_key_status.sql` | Read-only diagnostic: ODS bucket vs `A_SOURCE_FILE_RECEIVED` |
| `91_MARS_1409_POSTHOOK_rollback_workflow_keys.sql` | Clear backfilled values |
| `track_package_versions.sql` | Universal version tracking (no packages changed) |
| `verify_packages_version.sql` | Universal package verification |
| `README.md` | This file |
## Prerequisites
- MARS-1409 installed (`A_WORKFLOW_HISTORY_KEY` column must exist in `CT_MRDS.A_SOURCE_FILE_RECEIVED`)
- ODS tables populated with ingested data
- ADMIN user with access to CT_MRDS and ODS schemas
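
The first prerequisite can be checked manually before running the installer; this is the same read-only column check the master installation script performs:

```sql
-- Returns 1 when MARS-1409 is installed (the column exists), 0 otherwise
SELECT COUNT(*) AS col_present
FROM ALL_TAB_COLUMNS
WHERE OWNER = 'CT_MRDS'
  AND TABLE_NAME = 'A_SOURCE_FILE_RECEIVED'
  AND COLUMN_NAME = 'A_WORKFLOW_HISTORY_KEY';
```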
## Installation
```powershell
# Execute as ADMIN user
Get-Content "MARS_Packages/REL02_POST/MARS-1409-POSTHOOK/install_mars1409_posthook.sql" | sql "ADMIN/Cloudpass#34@ggmichalski_high"
```
Log file created automatically: `log/INSTALL_MARS_1409_POSTHOOK_<PDB>_<timestamp>.log`
## What it does
- Iterates all `INPUT` source configurations from `CT_MRDS.A_SOURCE_FILE_CONFIG`
- For each config, joins `A_SOURCE_FILE_RECEIVED` with `ODS.<TABLE_ID>_ODS` on `SOURCE_FILE_NAME = file$name`
- Updates `A_WORKFLOW_HISTORY_KEY` for records with statuses:
`VALIDATED`, `READY_FOR_INGESTION`, `INGESTED`, `ARCHIVED`, `ARCHIVED_AND_TRASHED`, `ARCHIVED_AND_PURGED`
- Skips configs with no NULL records or missing ODS tables
- Prints summary with counts per config and overall totals
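
The per-config update described above has the following general shape (a simplified sketch of the dynamic SQL built in `01_MARS_1409_POSTHOOK_update_existing_workflow_keys.sql`; `ODS.EXAMPLE_TABLE_ODS` is a placeholder for the real `ODS.<TABLE_ID>_ODS` name substituted at runtime):

```sql
-- Sketch only: the script builds this statement dynamically per config,
-- binding :config_key and substituting the ODS external table name.
-- NO_PARALLEL is required because these external tables fail under parallel query.
UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED sfr
SET sfr.A_WORKFLOW_HISTORY_KEY = (
      SELECT /*+ NO_PARALLEL(t) */ t.A_WORKFLOW_HISTORY_KEY
      FROM ODS.EXAMPLE_TABLE_ODS t
      WHERE t.file$name = sfr.SOURCE_FILE_NAME
        AND ROWNUM = 1
    )
WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :config_key
  AND sfr.A_WORKFLOW_HISTORY_KEY IS NULL
  AND sfr.PROCESSING_STATUS IN ('VALIDATED', 'READY_FOR_INGESTION', 'INGESTED',
                                'ARCHIVED', 'ARCHIVED_AND_TRASHED', 'ARCHIVED_AND_PURGED')
  AND EXISTS (
        SELECT /*+ NO_PARALLEL(t) */ 1
        FROM ODS.EXAMPLE_TABLE_ODS t
        WHERE t.file$name = sfr.SOURCE_FILE_NAME
      );
```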
## Rollback
```powershell
# Execute as ADMIN user
Get-Content "MARS_Packages/REL02_POST/MARS-1409-POSTHOOK/rollback_mars1409_posthook.sql" | sql "ADMIN/Cloudpass#34@ggmichalski_high"
```
Rollback clears all non-NULL `A_WORKFLOW_HISTORY_KEY` values from `A_SOURCE_FILE_RECEIVED`.
The trigger installed by MARS-1409 will repopulate new records automatically.
## Related
- MARS-1409: Add A_WORKFLOW_HISTORY_KEY column to A_SOURCE_FILE_RECEIVED (main package)

View File

@@ -0,0 +1,117 @@
-- ============================================================================
-- MARS-1409-POSTHOOK Master Installation Script
-- ============================================================================
-- Purpose: Post-hook for MARS-1409 - Backfill A_WORKFLOW_HISTORY_KEY for
-- existing A_SOURCE_FILE_RECEIVED records by joining with ODS tables.
-- Author: Grzegorz Michalski
-- Date: 2026-03-13
-- Prerequisites: MARS-1409 must be installed first (column must exist)
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
SET VERIFY OFF
SET FEEDBACK ON
SET ECHO OFF
-- Create log directory if it doesn't exist
host mkdir log 2>nul
-- Generate dynamic SPOOL filename with timestamp
var filename VARCHAR2(100)
BEGIN
:filename := 'log/INSTALL_MARS_1409_POSTHOOK_' || SYS_CONTEXT('USERENV', 'CON_NAME') || '_' || TO_CHAR(SYSDATE,'YYYYMMDD_HH24MISS') || '.log';
END;
/
column filename new_value _filename
select :filename filename from dual;
spool &_filename
PROMPT
PROMPT ============================================================================
PROMPT MARS-1409-POSTHOOK Installation Starting
PROMPT ============================================================================
PROMPT Purpose: Backfill A_WORKFLOW_HISTORY_KEY for historical records
PROMPT in A_SOURCE_FILE_RECEIVED using matching ODS tables.
PROMPT
PROMPT This script will:
PROMPT - Update A_WORKFLOW_HISTORY_KEY for records with targeted PROCESSING_STATUS
PROMPT - Match records by SOURCE_FILE_NAME against file$name in ODS tables
PROMPT - Skip configs with no NULL records or missing ODS tables
PROMPT
PROMPT Prerequisite: MARS-1409 installed (A_WORKFLOW_HISTORY_KEY column exists)
PROMPT Expected Duration: 30-180 minutes (depends on data volume)
PROMPT ============================================================================
-- Confirm installation with user
ACCEPT continue CHAR PROMPT 'Type YES to continue with installation, or Ctrl+C to abort: '
WHENEVER SQLERROR EXIT SQL.SQLCODE
BEGIN
IF '&continue' IS NULL OR TRIM('&continue') IS NULL OR UPPER(TRIM('&continue')) != 'YES' THEN
RAISE_APPLICATION_ERROR(-20000, 'Installation aborted by user');
END IF;
END;
/
WHENEVER SQLERROR CONTINUE
PROMPT
PROMPT ============================================================================
PROMPT PREREQUISITE CHECK: Verifying MARS-1409 objects
PROMPT ============================================================================
WHENEVER SQLERROR EXIT SQL.SQLCODE
DECLARE
vColCount NUMBER;
vTableCount NUMBER;
BEGIN
SELECT COUNT(*)
INTO vColCount
FROM ALL_TAB_COLUMNS
WHERE OWNER = 'CT_MRDS'
AND TABLE_NAME = 'A_SOURCE_FILE_RECEIVED'
AND COLUMN_NAME = 'A_WORKFLOW_HISTORY_KEY';
IF vColCount = 0 THEN
RAISE_APPLICATION_ERROR(-20001,
'Prerequisite failed: CT_MRDS.A_SOURCE_FILE_RECEIVED.A_WORKFLOW_HISTORY_KEY not found. Install MARS-1409 first (or do not run POSTHOOK after rollback).');
END IF;
SELECT COUNT(*)
INTO vTableCount
FROM ALL_TABLES
WHERE OWNER = 'CT_MRDS'
AND TABLE_NAME = 'A_WORKFLOW_HISTORY';
IF vTableCount = 0 THEN
RAISE_APPLICATION_ERROR(-20002,
'Prerequisite failed: CT_MRDS.A_WORKFLOW_HISTORY table not found.');
END IF;
DBMS_OUTPUT.PUT_LINE('OK: Prerequisites satisfied (MARS-1409 schema changes detected).');
END;
/
WHENEVER SQLERROR CONTINUE
PROMPT
PROMPT ============================================================================
PROMPT STEP 1: Backfill A_WORKFLOW_HISTORY_KEY for existing records
PROMPT ============================================================================
@@01_MARS_1409_POSTHOOK_update_existing_workflow_keys.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 2: Diagnose workflow key status
PROMPT ============================================================================
@@02_MARS_1409_POSTHOOK_diagnose_workflow_key_status.sql
PROMPT
PROMPT ============================================================================
PROMPT MARS-1409-POSTHOOK Installation Complete
PROMPT ============================================================================
PROMPT Final Status:
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS install_end FROM DUAL;
PROMPT
PROMPT Review the log file for detailed results: &_filename
PROMPT ============================================================================
spool off
quit;

View File

@@ -0,0 +1,69 @@
-- ============================================================================
-- MARS-1409-POSTHOOK Master Rollback Script
-- ============================================================================
-- Purpose: Rollback MARS-1409-POSTHOOK - Clear backfilled A_WORKFLOW_HISTORY_KEY
-- values from A_SOURCE_FILE_RECEIVED.
-- Author: Grzegorz Michalski
-- Date: 2026-03-13
-- Note: This clears ALL non-NULL A_WORKFLOW_HISTORY_KEY values. The trigger
-- installed by MARS-1409 will repopulate them on next file processing.
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
SET VERIFY OFF
SET FEEDBACK ON
SET ECHO OFF
-- Create log directory if it doesn't exist
host mkdir log 2>nul
-- Generate dynamic SPOOL filename with timestamp
var filename VARCHAR2(100)
BEGIN
:filename := 'log/ROLLBACK_MARS_1409_POSTHOOK_' || SYS_CONTEXT('USERENV', 'CON_NAME') || '_' || TO_CHAR(SYSDATE,'YYYYMMDD_HH24MISS') || '.log';
END;
/
column filename new_value _filename
select :filename filename from dual;
spool &_filename
PROMPT
PROMPT ============================================================================
PROMPT MARS-1409-POSTHOOK Rollback Starting
PROMPT ============================================================================
PROMPT This will reverse all changes from MARS-1409-POSTHOOK installation.
PROMPT
PROMPT Rollback steps:
PROMPT 1. Clear A_WORKFLOW_HISTORY_KEY values from A_SOURCE_FILE_RECEIVED
PROMPT ============================================================================
-- Confirm rollback with user
ACCEPT continue CHAR PROMPT 'Type YES to continue with rollback, or Ctrl+C to abort: '
WHENEVER SQLERROR EXIT SQL.SQLCODE
BEGIN
IF '&continue' IS NULL OR TRIM('&continue') IS NULL OR UPPER(TRIM('&continue')) != 'YES' THEN
RAISE_APPLICATION_ERROR(-20000, 'Rollback aborted by user');
END IF;
END;
/
WHENEVER SQLERROR CONTINUE
PROMPT
PROMPT ============================================================================
PROMPT STEP 1: Clear backfilled A_WORKFLOW_HISTORY_KEY values
PROMPT ============================================================================
@@91_MARS_1409_POSTHOOK_rollback_workflow_keys.sql
PROMPT
PROMPT ============================================================================
PROMPT MARS-1409-POSTHOOK Rollback Complete
PROMPT ============================================================================
PROMPT Final Status:
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS rollback_end FROM DUAL;
PROMPT
PROMPT Review the log file for detailed results: &_filename
PROMPT ============================================================================
spool off
quit;
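The sub-script invoked in Step 1 (@@91_MARS_1409_POSTHOOK_rollback_workflow_keys.sql) is not shown in this compare view. Based on the header note that ALL non-NULL A_WORKFLOW_HISTORY_KEY values are cleared, it presumably amounts to the following sketch; the table and column names come from the surrounding scripts, while the single-statement UPDATE and commit strategy are assumptions:

```sql
-- Hypothetical sketch of the Step 1 clearing logic; the real sub-script may differ.
DECLARE
  v_updated NUMBER;
BEGIN
  UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED
     SET A_WORKFLOW_HISTORY_KEY = NULL
   WHERE A_WORKFLOW_HISTORY_KEY IS NOT NULL;
  v_updated := SQL%ROWCOUNT;
  COMMIT;
  DBMS_OUTPUT.PUT_LINE('Cleared A_WORKFLOW_HISTORY_KEY on ' || v_updated || ' row(s).');
END;
/
```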


@@ -0,0 +1,26 @@
# MARS-1409 Package - Git Ignore Rules
# Standard exclusions for MARS deployment packages
# Confluence documentation (generated, not source)
confluence/
# Patches directory
patches/
# Log files from SPOOL operations
log/
# Test directories and files
test/
# Mock data scripts (development only)
mock_data/
# Temporary files
*.tmp
*.bak
*~
# Editor and IDE settings
.vscode/
.idea/


@@ -0,0 +1,55 @@
-- ============================================================================
-- MARS-1409 Step 01: Add A_WORKFLOW_HISTORY_KEY column to A_SOURCE_FILE_RECEIVED
-- ============================================================================
-- Purpose: Add A_WORKFLOW_HISTORY_KEY column to A_SOURCE_FILE_RECEIVED
-- using ALTER TABLE to preserve existing data.
-- Prerequisites: A_SOURCE_FILE_CONFIG table exists (FK dependency)
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
WHENEVER SQLERROR EXIT SQL.SQLCODE
PROMPT
PROMPT Adding A_WORKFLOW_HISTORY_KEY to A_SOURCE_FILE_RECEIVED...
BEGIN
EXECUTE IMMEDIATE 'ALTER TABLE CT_MRDS.A_SOURCE_FILE_RECEIVED ADD (A_WORKFLOW_HISTORY_KEY NUMBER)';
DBMS_OUTPUT.PUT_LINE('Column A_WORKFLOW_HISTORY_KEY added to A_SOURCE_FILE_RECEIVED.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -1430 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column A_WORKFLOW_HISTORY_KEY already exists in A_SOURCE_FILE_RECEIVED.');
ELSE
RAISE;
END IF;
END;
/
PROMPT
PROMPT Adding comment on A_WORKFLOW_HISTORY_KEY...
BEGIN
EXECUTE IMMEDIATE q'[COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_WORKFLOW_HISTORY_KEY IS 'Direct link to workflow history - each file has exactly one workflow execution. Populated during VALIDATE_SOURCE_FILE_RECEIVED (MARS-1409)']';
DBMS_OUTPUT.PUT_LINE('Comment on A_WORKFLOW_HISTORY_KEY added.');
END;
/
PROMPT
PROMPT Renaming IS_KEEP_IN_TRASH to IS_KEPT_IN_TRASH in A_SOURCE_FILE_CONFIG...
BEGIN
EXECUTE IMMEDIATE 'ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG RENAME COLUMN IS_KEEP_IN_TRASH TO IS_KEPT_IN_TRASH';
DBMS_OUTPUT.PUT_LINE('Column IS_KEEP_IN_TRASH renamed to IS_KEPT_IN_TRASH in A_SOURCE_FILE_CONFIG.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -904 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column IS_KEEP_IN_TRASH does not exist (already renamed or not present).');
ELSE
RAISE;
END IF;
END;
/
PROMPT
PROMPT Step 01 completed: A_WORKFLOW_HISTORY_KEY column added and IS_KEEP_IN_TRASH renamed to IS_KEPT_IN_TRASH.
PROMPT
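Because both changes in this step are idempotent (ORA-01430 and ORA-00904 are swallowed as SKIP), a read-only post-step check can confirm the resulting state regardless of how many times the step ran. A sketch of such a verification query, safe to run repeatedly:

```sql
-- Expect one row for A_WORKFLOW_HISTORY_KEY and one for IS_KEPT_IN_TRASH.
SELECT table_name, column_name, data_type, nullable
  FROM all_tab_columns
 WHERE owner = 'CT_MRDS'
   AND (   (table_name = 'A_SOURCE_FILE_RECEIVED' AND column_name = 'A_WORKFLOW_HISTORY_KEY')
        OR (table_name = 'A_SOURCE_FILE_CONFIG'   AND column_name = 'IS_KEPT_IN_TRASH'))
 ORDER BY table_name, column_name;
```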


@@ -0,0 +1,104 @@
-- ============================================================================
-- MARS-1409 Step 10: Update A_TABLE_STAT, A_TABLE_STAT_HIST, A_SOURCE_FILE_CONFIG
-- ============================================================================
-- Purpose: Apply MARS-1409 table changes:
-- - A_TABLE_STAT and A_TABLE_STAT_HIST: DROP and recreate from new_version
-- (stats tables with no critical persistent data)
-- - A_SOURCE_FILE_CONFIG: ALTER TABLE ADD IS_WORKFLOW_SUCCESS_REQUIRED column
-- (preserves existing configuration data)
-- - A_SOURCE_FILE_RECEIVED: no changes in this step
-- Prerequisites: A_SOURCE table exists (FK parent of A_SOURCE_FILE_CONFIG)
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
WHENEVER SQLERROR EXIT SQL.SQLCODE
-- ----------------------------------------------------------------------------
-- DROP A_TABLE_STAT
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Dropping A_TABLE_STAT...
BEGIN
EXECUTE IMMEDIATE 'DROP TABLE CT_MRDS.A_TABLE_STAT';
DBMS_OUTPUT.PUT_LINE('Table A_TABLE_STAT dropped.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -942 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Table A_TABLE_STAT does not exist.');
ELSE
RAISE;
END IF;
END;
/
-- ----------------------------------------------------------------------------
-- DROP A_TABLE_STAT_HIST
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Dropping A_TABLE_STAT_HIST...
BEGIN
EXECUTE IMMEDIATE 'DROP TABLE CT_MRDS.A_TABLE_STAT_HIST';
DBMS_OUTPUT.PUT_LINE('Table A_TABLE_STAT_HIST dropped.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -942 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Table A_TABLE_STAT_HIST does not exist.');
ELSE
RAISE;
END IF;
END;
/
-- ----------------------------------------------------------------------------
-- ADD IS_WORKFLOW_SUCCESS_REQUIRED to A_SOURCE_FILE_CONFIG
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Adding IS_WORKFLOW_SUCCESS_REQUIRED to A_SOURCE_FILE_CONFIG...
BEGIN
EXECUTE IMMEDIATE
'ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG ADD ('
|| ' IS_WORKFLOW_SUCCESS_REQUIRED CHAR(1) DEFAULT ''Y'' NOT NULL '
|| ' CONSTRAINT CHK_IS_WORKFLOW_SUCCESS_REQUIRED CHECK (IS_WORKFLOW_SUCCESS_REQUIRED IN (''Y'', ''N''))'
|| ')';
DBMS_OUTPUT.PUT_LINE('Column IS_WORKFLOW_SUCCESS_REQUIRED added to A_SOURCE_FILE_CONFIG.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -1430 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column IS_WORKFLOW_SUCCESS_REQUIRED already exists in A_SOURCE_FILE_CONFIG.');
ELSE
RAISE;
END IF;
END;
/
PROMPT
PROMPT Adding comment on IS_WORKFLOW_SUCCESS_REQUIRED...
BEGIN
EXECUTE IMMEDIATE q'[COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED IS 'Y=Archival requires WORKFLOW_SUCCESSFUL=Y (standard DBT flow), N=Archive regardless of workflow completion status (bypass for manual/non-DBT sources). Added MARS-1409']';
DBMS_OUTPUT.PUT_LINE('Comment on IS_WORKFLOW_SUCCESS_REQUIRED added.');
END;
/
-- ----------------------------------------------------------------------------
-- RECREATE A_TABLE_STAT and A_TABLE_STAT_HIST from new_version
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Creating A_TABLE_STAT (new_version)...
@@new_version/A_TABLE_STAT.sql
PROMPT
PROMPT Creating A_TABLE_STAT_HIST (new_version)...
@@new_version/A_TABLE_STAT_HIST.sql
PROMPT
PROMPT Step 10 completed: A_TABLE_STAT and A_TABLE_STAT_HIST recreated from new_version scripts,
PROMPT IS_WORKFLOW_SUCCESS_REQUIRED column added to A_SOURCE_FILE_CONFIG (MARS-1409).
PROMPT
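Step 10's new column carries a named check constraint, so its presence and status can be confirmed after the fact. A sketch of such a check (read-only; ALL_CONSTRAINTS.SEARCH_CONDITION is a LONG and is omitted here):

```sql
-- Expect one ENABLED row of constraint_type 'C' when Step 10 has been applied.
SELECT constraint_name, constraint_type, status
  FROM all_constraints
 WHERE owner = 'CT_MRDS'
   AND table_name = 'A_SOURCE_FILE_CONFIG'
   AND constraint_name = 'CHK_IS_WORKFLOW_SUCCESS_REQUIRED';
```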


@@ -0,0 +1,21 @@
-- ============================================================================
-- MARS-1409 Installation Script
-- ============================================================================
-- Script: 01B_MARS_1409_install_CT_MRDS_ENV_MANAGER_SPEC.sql
-- Description: Install ENV_MANAGER v3.3.0 package specification
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- Dependencies: 01A_MARS_1409_update_existing_workflow_keys.sql
-- ============================================================================
PROMPT ============================================================================
PROMPT STEP 1B: Update ENV_MANAGER package specification
PROMPT ============================================================================
PROMPT Installing ENV_MANAGER package specification...
@@new_version/ENV_MANAGER.pkg
PROMPT ENV_MANAGER specification installed
/


@@ -0,0 +1,21 @@
-- ============================================================================
-- MARS-1409 Installation Script
-- ============================================================================
-- Script: 01C_MARS_1409_install_CT_MRDS_ENV_MANAGER_BODY.sql
-- Description: Install ENV_MANAGER v3.3.0 package body
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- Dependencies: 01B_MARS_1409_install_CT_MRDS_ENV_MANAGER_SPEC.sql
-- ============================================================================
PROMPT ============================================================================
PROMPT STEP 1C: Update ENV_MANAGER package body
PROMPT ============================================================================
PROMPT Installing ENV_MANAGER package body...
@@new_version/ENV_MANAGER.pkb
PROMPT ENV_MANAGER body installed
/


@@ -0,0 +1,16 @@
-- ============================================================================
-- MARS-1409 Step 02: Install FILE_MANAGER Package Specification
-- ============================================================================
-- Purpose: Deploy updated FILE_MANAGER package specification
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
PROMPT
PROMPT Installing FILE_MANAGER package specification...
-- Source from new_version directory
@@new_version/FILE_MANAGER.pkg
PROMPT FILE_MANAGER specification installed
PROMPT


@@ -0,0 +1,16 @@
-- ============================================================================
-- MARS-1409 Step 03: Install FILE_MANAGER Package Body
-- ============================================================================
-- Purpose: Deploy updated FILE_MANAGER package body
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
PROMPT
PROMPT Installing FILE_MANAGER package body...
-- Source from new_version directory
@@new_version/FILE_MANAGER.pkb
PROMPT FILE_MANAGER body installed
PROMPT


@@ -0,0 +1,21 @@
-- ============================================================================
-- MARS-1409 Installation Script
-- ============================================================================
-- Script: 03A_MARS_1409_install_CT_MRDS_FILE_ARCHIVER_SPEC.sql
-- Description: Install FILE_ARCHIVER package specification
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- Dependencies: 03_MARS_1409_install_CT_MRDS_FILE_MANAGER_BODY.sql
-- ============================================================================
PROMPT ============================================================================
PROMPT STEP 3A: Update FILE_ARCHIVER package specification
PROMPT ============================================================================
PROMPT Installing FILE_ARCHIVER package specification...
@@new_version/FILE_ARCHIVER.pkg
PROMPT FILE_ARCHIVER specification installed
/


@@ -0,0 +1,21 @@
-- ============================================================================
-- MARS-1409 Installation Script
-- ============================================================================
-- Script: 03B_MARS_1409_install_CT_MRDS_FILE_ARCHIVER_BODY.sql
-- Description: Install FILE_ARCHIVER package body
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- Dependencies: 03A_MARS_1409_install_CT_MRDS_FILE_ARCHIVER_SPEC.sql
-- ============================================================================
PROMPT ============================================================================
PROMPT STEP 3B: Update FILE_ARCHIVER package body
PROMPT ============================================================================
PROMPT Installing FILE_ARCHIVER package body...
@@new_version/FILE_ARCHIVER.pkb
PROMPT FILE_ARCHIVER body installed
/


@@ -0,0 +1,28 @@
-- ============================================================================
-- MARS-1409 Step 08: Install A_WORKFLOW_HISTORY trigger
-- ============================================================================
-- Purpose: Update trigger to mark A_SOURCE_FILE_RECEIVED as INGESTED
-- when WORKFLOW_SUCCESSFUL is set to 'Y'
-- ============================================================================
PROMPT Installing A_WORKFLOW_HISTORY (new_version)...
@@new_version/A_WORKFLOW_HISTORY.sql
PROMPT
DECLARE
v_status VARCHAR2(20);
BEGIN
SELECT status INTO v_status
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name = 'A_WORKFLOW_HISTORY'
AND object_type = 'TRIGGER';
DBMS_OUTPUT.PUT_LINE('A_WORKFLOW_HISTORY status: ' || v_status);
IF v_status != 'VALID' THEN
RAISE_APPLICATION_ERROR(-20002, 'ERROR: A_WORKFLOW_HISTORY compiled with errors (status=' || v_status || '). Installation aborted.');
END IF;
EXCEPTION
WHEN NO_DATA_FOUND THEN
RAISE_APPLICATION_ERROR(-20001, 'ERROR: A_WORKFLOW_HISTORY not found after installation');
END;
/


@@ -0,0 +1,147 @@
-- ============================================================================
-- MARS-1409 Step 04: Verify Installation
-- ============================================================================
-- Purpose: Verify successful installation of MARS-1409 changes
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
SET VERIFY OFF
SET FEEDBACK ON
PROMPT
PROMPT ============================================================================
PROMPT Verifying MARS-1409 Installation
PROMPT ============================================================================
-- Check if column was added
PROMPT
PROMPT 1. Checking A_WORKFLOW_HISTORY_KEY column existence...
SELECT
column_name,
data_type,
data_length,
nullable
FROM all_tab_columns
WHERE owner = 'CT_MRDS'
AND table_name = 'A_SOURCE_FILE_RECEIVED'
AND column_name = 'A_WORKFLOW_HISTORY_KEY';
-- Check foreign key constraint
PROMPT
PROMPT 2. Checking foreign key constraint...
SELECT
constraint_name,
constraint_type,
r_constraint_name,
status
FROM all_constraints
WHERE owner = 'CT_MRDS'
AND table_name = 'A_SOURCE_FILE_RECEIVED'
AND constraint_type = 'R'
AND constraint_name LIKE '%WORKFLOW_HISTORY%';
-- Check package compilation status
PROMPT
PROMPT 3. Checking FILE_MANAGER package compilation...
SELECT
object_name,
object_type,
status,
last_ddl_time
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name = 'FILE_MANAGER'
AND object_type IN ('PACKAGE', 'PACKAGE BODY')
ORDER BY object_type;
-- Check for compilation errors
PROMPT
PROMPT 4. Checking for compilation errors...
SELECT
name,
type,
line,
position,
text
FROM all_errors
WHERE owner = 'CT_MRDS'
AND name = 'FILE_MANAGER'
ORDER BY type, line, position;
-- Check FILE_ARCHIVER compilation status
PROMPT
PROMPT 5. Checking FILE_ARCHIVER package compilation...
SELECT
object_name,
object_type,
status,
last_ddl_time
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name = 'FILE_ARCHIVER'
AND object_type IN ('PACKAGE', 'PACKAGE BODY')
ORDER BY object_type;
SELECT
name,
type,
line,
position,
text
FROM all_errors
WHERE owner = 'CT_MRDS'
AND name = 'FILE_ARCHIVER'
ORDER BY type, line, position;
-- Check DATA_EXPORTER compilation status
PROMPT
PROMPT 5A. Checking DATA_EXPORTER package compilation...
SELECT
object_name,
object_type,
status,
last_ddl_time
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name = 'DATA_EXPORTER'
AND object_type IN ('PACKAGE', 'PACKAGE BODY')
ORDER BY object_type;
SELECT
name,
type,
line,
position,
text
FROM all_errors
WHERE owner = 'CT_MRDS'
AND name = 'DATA_EXPORTER'
ORDER BY type, line, position;
-- Check trigger status
PROMPT
PROMPT 5B. Checking A_WORKFLOW_HISTORY trigger...
SELECT
trigger_name,
trigger_type,
triggering_event,
status
FROM all_triggers
WHERE owner = 'CT_MRDS'
AND trigger_name = 'A_WORKFLOW_HISTORY';
-- Verify package versions
PROMPT
PROMPT 6. Verifying package versions...
SELECT 'FILE_MANAGER' AS PACKAGE_NAME, CT_MRDS.FILE_MANAGER.GET_VERSION() AS VERSION FROM DUAL
UNION ALL
SELECT 'ENV_MANAGER' AS PACKAGE_NAME, CT_MRDS.ENV_MANAGER.GET_VERSION() AS VERSION FROM DUAL
UNION ALL
SELECT 'FILE_ARCHIVER' AS PACKAGE_NAME, CT_MRDS.FILE_ARCHIVER.GET_VERSION() AS VERSION FROM DUAL
UNION ALL
SELECT 'DATA_EXPORTER' AS PACKAGE_NAME, CT_MRDS.DATA_EXPORTER.GET_VERSION() AS VERSION FROM DUAL;
PROMPT
PROMPT ============================================================================
PROMPT Verification Complete
PROMPT ============================================================================


@@ -0,0 +1,18 @@
-- ============================================================================
-- MARS-1409 Step 11: Install DATA_EXPORTER Package Specification
-- ============================================================================
-- Script: 11_MARS_1409_install_CT_MRDS_DATA_EXPORTER_SPEC.sql
-- Description: Install DATA_EXPORTER package specification (new version)
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- ============================================================================
PROMPT ============================================================================
PROMPT Installing DATA_EXPORTER package specification...
PROMPT ============================================================================
@@new_version/DATA_EXPORTER.pkg
PROMPT DATA_EXPORTER specification installed
/


@@ -0,0 +1,18 @@
-- ============================================================================
-- MARS-1409 Step 12: Install DATA_EXPORTER Package Body
-- ============================================================================
-- Script: 12_MARS_1409_install_CT_MRDS_DATA_EXPORTER_BODY.sql
-- Description: Install DATA_EXPORTER package body (new version)
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- ============================================================================
PROMPT ============================================================================
PROMPT Installing DATA_EXPORTER package body...
PROMPT ============================================================================
@@new_version/DATA_EXPORTER.pkb
PROMPT DATA_EXPORTER body installed
/


@@ -0,0 +1,18 @@
-- ============================================================================
-- MARS-1409 Rollback Script
-- ============================================================================
-- Script: 83_MARS_1409_rollback_CT_MRDS_DATA_EXPORTER_SPEC.sql
-- Description: Restore DATA_EXPORTER package specification (previous version)
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- ============================================================================
PROMPT ============================================================================
PROMPT Restoring DATA_EXPORTER package specification...
PROMPT ============================================================================
@@rollback_version/DATA_EXPORTER.pkg
PROMPT DATA_EXPORTER specification restored
/


@@ -0,0 +1,18 @@
-- ============================================================================
-- MARS-1409 Rollback Script
-- ============================================================================
-- Script: 84_MARS_1409_rollback_CT_MRDS_DATA_EXPORTER_BODY.sql
-- Description: Restore DATA_EXPORTER package body (previous version)
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- ============================================================================
PROMPT ============================================================================
PROMPT Restoring DATA_EXPORTER package body...
PROMPT ============================================================================
@@rollback_version/DATA_EXPORTER.pkb
PROMPT DATA_EXPORTER body restored
/


@@ -0,0 +1,90 @@
-- ============================================================================
-- MARS-1409 Rollback 90: Verify Rollback
-- ============================================================================
-- Purpose: Verify successful rollback of MARS-1409 changes
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
PROMPT
PROMPT ============================================================================
PROMPT Verifying MARS-1409 Rollback
PROMPT ============================================================================
-- Check if column was removed
PROMPT
PROMPT 1. Verifying A_WORKFLOW_HISTORY_KEY column removal...
DECLARE
v_count NUMBER;
BEGIN
SELECT COUNT(*)
INTO v_count
FROM all_tab_columns
WHERE owner = 'CT_MRDS'
AND table_name = 'A_SOURCE_FILE_RECEIVED'
AND column_name = 'A_WORKFLOW_HISTORY_KEY';
IF v_count = 0 THEN
DBMS_OUTPUT.PUT_LINE('SUCCESS: A_WORKFLOW_HISTORY_KEY column removed');
ELSE
DBMS_OUTPUT.PUT_LINE('WARNING: A_WORKFLOW_HISTORY_KEY column still exists');
END IF;
END;
/
-- Check trigger was restored
PROMPT
PROMPT 1B. Checking A_WORKFLOW_HISTORY trigger status...
DECLARE
v_status VARCHAR2(20);
BEGIN
SELECT status INTO v_status
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name = 'A_WORKFLOW_HISTORY'
AND object_type = 'TRIGGER';
DBMS_OUTPUT.PUT_LINE('A_WORKFLOW_HISTORY status: ' || v_status);
EXCEPTION
WHEN NO_DATA_FOUND THEN
DBMS_OUTPUT.PUT_LINE('WARNING: A_WORKFLOW_HISTORY not found');
END;
/
-- Check compilation status
PROMPT
PROMPT 2. Checking package compilation status...
SELECT
object_name,
object_type,
status,
last_ddl_time
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER', 'DATA_EXPORTER')
AND object_type IN ('PACKAGE', 'PACKAGE BODY')
ORDER BY object_name, object_type;
-- Check for compilation errors
PROMPT
PROMPT 3. Checking for compilation errors...
SELECT name, type, line, position, text
FROM all_errors
WHERE owner = 'CT_MRDS'
AND name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER', 'DATA_EXPORTER')
ORDER BY name, type, line, position;
-- Verify package versions
PROMPT
PROMPT 4. Verifying package versions after rollback...
SELECT 'FILE_MANAGER' AS PACKAGE_NAME, CT_MRDS.FILE_MANAGER.GET_VERSION() AS VERSION FROM DUAL
UNION ALL
SELECT 'ENV_MANAGER' AS PACKAGE_NAME, CT_MRDS.ENV_MANAGER.GET_VERSION() AS VERSION FROM DUAL
UNION ALL
SELECT 'FILE_ARCHIVER' AS PACKAGE_NAME, CT_MRDS.FILE_ARCHIVER.GET_VERSION() AS VERSION FROM DUAL
UNION ALL
SELECT 'DATA_EXPORTER' AS PACKAGE_NAME, CT_MRDS.DATA_EXPORTER.GET_VERSION() AS VERSION FROM DUAL;
PROMPT
PROMPT ============================================================================
PROMPT Rollback Verification Complete
PROMPT ============================================================================


@@ -0,0 +1,18 @@
-- ============================================================================
-- MARS-1409 Rollback Script
-- ============================================================================
-- Script: 93B_MARS_1409_rollback_FILE_ARCHIVER_SPEC.sql
-- Description: Restore FILE_ARCHIVER package specification (previous version)
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- ============================================================================
PROMPT ============================================================================
PROMPT Restoring FILE_ARCHIVER package specification...
PROMPT ============================================================================
@@rollback_version/FILE_ARCHIVER.pkg
PROMPT FILE_ARCHIVER specification restored
/


@@ -0,0 +1,18 @@
-- ============================================================================
-- MARS-1409 Rollback Script
-- ============================================================================
-- Script: 93A_MARS_1409_rollback_FILE_ARCHIVER_BODY.sql
-- Description: Restore FILE_ARCHIVER package body (previous version)
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- ============================================================================
PROMPT ============================================================================
PROMPT Restoring FILE_ARCHIVER package body...
PROMPT ============================================================================
@@rollback_version/FILE_ARCHIVER.pkb
PROMPT FILE_ARCHIVER body restored
/


@@ -0,0 +1,16 @@
-- ============================================================================
-- MARS-1409 Rollback 92: Restore FILE_MANAGER Package Specification
-- ============================================================================
-- Purpose: Restore previous FILE_MANAGER package specification
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
PROMPT
PROMPT Restoring FILE_MANAGER package specification...
-- Source from rollback_version directory
@@rollback_version/FILE_MANAGER.pkg
PROMPT FILE_MANAGER specification restored
PROMPT


@@ -0,0 +1,16 @@
-- ============================================================================
-- MARS-1409 Rollback 93: Restore FILE_MANAGER Package Body
-- ============================================================================
-- Purpose: Restore previous FILE_MANAGER package body
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
PROMPT
PROMPT Restoring FILE_MANAGER package body...
-- Source from rollback_version directory
@@rollback_version/FILE_MANAGER.pkb
PROMPT FILE_MANAGER body restored
PROMPT


@@ -0,0 +1,18 @@
-- ============================================================================
-- MARS-1409 Rollback Script
-- ============================================================================
-- Script: 92A_MARS_1409_rollback_ENV_MANAGER_SPEC.sql
-- Description: Restore ENV_MANAGER v3.2.0 package specification (previous version)
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- ============================================================================
PROMPT ============================================================================
PROMPT Restoring ENV_MANAGER package specification...
PROMPT ============================================================================
@@rollback_version/ENV_MANAGER.pkg
PROMPT ENV_MANAGER specification restored
/


@@ -0,0 +1,18 @@
-- ============================================================================
-- MARS-1409 Rollback Script
-- ============================================================================
-- Script: 92B_MARS_1409_rollback_ENV_MANAGER_BODY.sql
-- Description: Restore ENV_MANAGER v3.2.0 package body (previous version)
-- Author: Grzegorz Michalski
-- Date: 2026-02-27
-- ============================================================================
PROMPT ============================================================================
PROMPT Restoring ENV_MANAGER package body...
PROMPT ============================================================================
@@rollback_version/ENV_MANAGER.pkb
PROMPT ENV_MANAGER body restored
/


@@ -0,0 +1,26 @@
-- ============================================================================
-- MARS-1409 Rollback 93C: Restore A_WORKFLOW_HISTORY trigger
-- ============================================================================
-- Purpose: Restore trigger to pre-MARS-1409 state
-- Removes INGESTED status update logic from A_SOURCE_FILE_RECEIVED
-- ============================================================================
PROMPT Restoring trigger A_WORKFLOW_HISTORY (rollback_version)...
@@rollback_version/A_WORKFLOW_HISTORY.sql
PROMPT
DECLARE
v_status VARCHAR2(20);
BEGIN
-- After rollback the trigger is restored under its original name: a_workflow_history
SELECT status INTO v_status
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name = 'A_WORKFLOW_HISTORY'
AND object_type = 'TRIGGER';
DBMS_OUTPUT.PUT_LINE('A_WORKFLOW_HISTORY (original trigger) restored, status: ' || v_status);
EXCEPTION
WHEN NO_DATA_FOUND THEN
RAISE_APPLICATION_ERROR(-20001, 'ERROR: A_WORKFLOW_HISTORY not found after rollback');
END;
/


@@ -0,0 +1,92 @@
-- ============================================================================
-- MARS-1409 Rollback Step 100: Restore A_TABLE_STAT, A_TABLE_STAT_HIST,
-- remove IS_WORKFLOW_SUCCESS_REQUIRED from A_SOURCE_FILE_CONFIG
-- ============================================================================
-- Purpose: Rollback of step 10:
-- - A_TABLE_STAT and A_TABLE_STAT_HIST: DROP and recreate from rollback_version
-- - A_SOURCE_FILE_CONFIG: ALTER TABLE DROP COLUMN IS_WORKFLOW_SUCCESS_REQUIRED
-- (preserves existing configuration data)
-- - A_SOURCE_FILE_RECEIVED: no changes in this step
-- Prerequisites: Step 10 was applied
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
WHENEVER SQLERROR EXIT SQL.SQLCODE
-- ----------------------------------------------------------------------------
-- DROP A_TABLE_STAT
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Dropping A_TABLE_STAT...
BEGIN
EXECUTE IMMEDIATE 'DROP TABLE CT_MRDS.A_TABLE_STAT';
DBMS_OUTPUT.PUT_LINE('Table A_TABLE_STAT dropped.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -942 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Table A_TABLE_STAT does not exist.');
ELSE
RAISE;
END IF;
END;
/
-- ----------------------------------------------------------------------------
-- DROP A_TABLE_STAT_HIST
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Dropping A_TABLE_STAT_HIST...
BEGIN
EXECUTE IMMEDIATE 'DROP TABLE CT_MRDS.A_TABLE_STAT_HIST';
DBMS_OUTPUT.PUT_LINE('Table A_TABLE_STAT_HIST dropped.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -942 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Table A_TABLE_STAT_HIST does not exist.');
ELSE
RAISE;
END IF;
END;
/
-- ----------------------------------------------------------------------------
-- DROP IS_WORKFLOW_SUCCESS_REQUIRED from A_SOURCE_FILE_CONFIG
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Dropping IS_WORKFLOW_SUCCESS_REQUIRED from A_SOURCE_FILE_CONFIG...
BEGIN
EXECUTE IMMEDIATE 'ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG DROP COLUMN IS_WORKFLOW_SUCCESS_REQUIRED';
DBMS_OUTPUT.PUT_LINE('Column IS_WORKFLOW_SUCCESS_REQUIRED dropped from A_SOURCE_FILE_CONFIG (CHK_IS_WORKFLOW_SUCCESS_REQUIRED constraint dropped automatically).');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -904 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column IS_WORKFLOW_SUCCESS_REQUIRED does not exist in A_SOURCE_FILE_CONFIG.');
ELSE
RAISE;
END IF;
END;
/
-- ----------------------------------------------------------------------------
-- RECREATE A_TABLE_STAT and A_TABLE_STAT_HIST from rollback_version
-- ----------------------------------------------------------------------------
PROMPT
PROMPT Creating A_TABLE_STAT (rollback_version - pre-MARS-1409 structure)...
@@rollback_version/A_TABLE_STAT.sql
PROMPT
PROMPT Creating A_TABLE_STAT_HIST (rollback_version - pre-MARS-1409 structure)...
@@rollback_version/A_TABLE_STAT_HIST.sql
PROMPT
PROMPT Rollback Step 100 completed: A_TABLE_STAT and A_TABLE_STAT_HIST restored to pre-MARS-1409
PROMPT structure, IS_WORKFLOW_SUCCESS_REQUIRED column removed from A_SOURCE_FILE_CONFIG.
PROMPT


@@ -0,0 +1,46 @@
-- ============================================================================
-- MARS-1409 Rollback 99: Remove A_WORKFLOW_HISTORY_KEY column from A_SOURCE_FILE_RECEIVED
-- ============================================================================
-- Purpose: Drop A_WORKFLOW_HISTORY_KEY column from A_SOURCE_FILE_RECEIVED
-- using ALTER TABLE to preserve existing data.
-- Prerequisites: Step 01 was applied (column added and IS_KEEP_IN_TRASH renamed)
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
WHENEVER SQLERROR EXIT SQL.SQLCODE
PROMPT
PROMPT Dropping A_WORKFLOW_HISTORY_KEY from A_SOURCE_FILE_RECEIVED...
BEGIN
EXECUTE IMMEDIATE 'ALTER TABLE CT_MRDS.A_SOURCE_FILE_RECEIVED DROP COLUMN A_WORKFLOW_HISTORY_KEY';
DBMS_OUTPUT.PUT_LINE('Column A_WORKFLOW_HISTORY_KEY dropped from A_SOURCE_FILE_RECEIVED.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -904 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column A_WORKFLOW_HISTORY_KEY does not exist in A_SOURCE_FILE_RECEIVED.');
ELSE
RAISE;
END IF;
END;
/
PROMPT
PROMPT Renaming IS_KEPT_IN_TRASH back to IS_KEEP_IN_TRASH in A_SOURCE_FILE_CONFIG...
BEGIN
EXECUTE IMMEDIATE 'ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG RENAME COLUMN IS_KEPT_IN_TRASH TO IS_KEEP_IN_TRASH';
DBMS_OUTPUT.PUT_LINE('Column IS_KEPT_IN_TRASH renamed back to IS_KEEP_IN_TRASH in A_SOURCE_FILE_CONFIG.');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -904 THEN
DBMS_OUTPUT.PUT_LINE('SKIP: Column IS_KEPT_IN_TRASH does not exist (already renamed back or not present).');
ELSE
RAISE;
END IF;
END;
/
PROMPT
PROMPT Rollback 99 completed: A_WORKFLOW_HISTORY_KEY removed and IS_KEPT_IN_TRASH renamed back to IS_KEEP_IN_TRASH.
PROMPT


@@ -0,0 +1,210 @@
# MARS-1409: Add A_WORKFLOW_HISTORY_KEY to A_SOURCE_FILE_RECEIVED
## Overview
Package that adds the A_WORKFLOW_HISTORY_KEY column to the A_SOURCE_FILE_RECEIVED table and updates the FILE_MANAGER package to populate this value during file registration.
## Purpose
Provides direct tracking of workflow history keys during file registration, improving diagnostics and simplifying archival queries.
## Structure
```
MARS-1409/
├── .gitignore
├── install_mars1409.sql # Master installation script (8 steps)
├── rollback_mars1409.sql # Master rollback script (5 steps)
├── verify_packages_version.sql # Version verification
├── track_package_versions.sql # Version tracking
├── 01_MARS_1409_add_workflow_history_key_column.sql
├── 01A_MARS_1409_update_existing_workflow_keys.sql # Update existing records
├── 01B_MARS_1409_install_CT_MRDS_ENV_MANAGER_SPEC.sql # ENV_MANAGER v3.3.0 spec
├── 01C_MARS_1409_install_CT_MRDS_ENV_MANAGER_BODY.sql # ENV_MANAGER v3.3.0 body
├── 02_MARS_1409_install_CT_MRDS_FILE_MANAGER_SPEC.sql
├── 03_MARS_1409_install_CT_MRDS_FILE_MANAGER_BODY.sql
├── 04_MARS_1409_verify_installation.sql
├── 90_MARS_1409_verify_rollback.sql
├── 91_MARS_1409_rollback_workflow_history_key_column.sql
├── 91A_MARS_1409_rollback_existing_workflow_keys.sql # Clear existing values
├── 92_MARS_1409_rollback_FILE_MANAGER_SPEC.sql
├── 92A_MARS_1409_rollback_ENV_MANAGER_SPEC.sql # ENV_MANAGER v3.2.0 spec
├── 92B_MARS_1409_rollback_ENV_MANAGER_BODY.sql # ENV_MANAGER v3.2.0 body
├── 93_MARS_1409_rollback_FILE_MANAGER_BODY.sql
├── new_version/ # Updated packages
│ ├── A_SOURCE_FILE_RECEIVED.sql # Updated table definition
│ ├── ENV_MANAGER.pkg # v3.3.0
│ ├── ENV_MANAGER.pkb # v3.3.0
│ ├── FILE_MANAGER.pkg # v3.6.0
│ ├── FILE_MANAGER.pkb # v3.6.0
│ └── FILE_ARCHIVER.pkb # Current version
├── rollback_version/ # Previous versions
│ ├── A_SOURCE_FILE_RECEIVED.sql # Original table definition
│ ├── ENV_MANAGER.pkg # v3.2.0
│ ├── ENV_MANAGER.pkb # v3.2.0
│ ├── FILE_MANAGER.pkg # v3.5.1
│ ├── FILE_MANAGER.pkb # v3.5.1
│ └── FILE_ARCHIVER.pkb # Previous version
└── log/ # Installation logs
```
## Status
**TESTED & VERIFIED** - Installation and rollback validated in DEV environment (2026-02-27)
- ✅ Installation: SUCCESS (8 steps)
- ✅ Rollback: SUCCESS (5 steps)
- ✅ Package compilation: ALL VALID
- ✅ Version tracking: Working correctly
- ⚠️ Prerequisite: MARS-828 column rename must be applied first
## Implementation Details
### Database Changes
- Added `A_WORKFLOW_HISTORY_KEY NUMBER` column to `CT_MRDS.A_SOURCE_FILE_RECEIVED`
- No FK constraint (workflow history record created later in processing)
- Column populated during VALIDATE_SOURCE_FILE_RECEIVED procedure
- **Migration script** (01A): Updates A_WORKFLOW_HISTORY_KEY for existing records by extracting values from ODS tables
### Package Changes
- **ENV_MANAGER v3.3.0**: Added error codes CODE_WORKFLOW_KEY_NULL (-20035) and CODE_MULTIPLE_WORKFLOW_KEYS (-20036)
- **FILE_MANAGER v3.6.0**: Enhanced VALIDATE_SOURCE_FILE_RECEIVED to extract and validate A_WORKFLOW_HISTORY_KEY from external tables
### Validation Rules
- **NULL values**: Fatal error - file must contain A_WORKFLOW_HISTORY_KEY
- **Multiple values**: Fatal error - each file must have exactly one workflow execution key
- **Single value**: Value extracted and stored in A_SOURCE_FILE_RECEIVED
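
The rules above can be sketched as a PL/SQL fragment. This is an illustration only, not the shipped code: the actual logic lives in FILE_MANAGER.VALIDATE_SOURCE_FILE_RECEIVED, and the external table name (EXT_TMP_SOURCE_FILE) and the received-key variable are assumptions.

```sql
-- Sketch of the validation rules; table name EXT_TMP_SOURCE_FILE is hypothetical.
DECLARE
  vReceivedKey  NUMBER := 42;  -- hypothetical A_SOURCE_FILE_RECEIVED_KEY
  vKeyCount     NUMBER;
  vWorkflowKey  NUMBER;
BEGIN
  -- COUNT(DISTINCT ...) ignores NULLs, so an all-NULL file yields vKeyCount = 0
  EXECUTE IMMEDIATE
    'SELECT COUNT(DISTINCT A_WORKFLOW_HISTORY_KEY), MIN(A_WORKFLOW_HISTORY_KEY)
       FROM CT_MRDS.EXT_TMP_SOURCE_FILE'
    INTO vKeyCount, vWorkflowKey;

  IF vKeyCount = 0 THEN
    -- NULL values: fatal (ENV_MANAGER.CODE_WORKFLOW_KEY_NULL)
    RAISE_APPLICATION_ERROR(-20035, 'A_WORKFLOW_HISTORY_KEY is NULL in source file');
  ELSIF vKeyCount > 1 THEN
    -- Multiple values: fatal (ENV_MANAGER.CODE_MULTIPLE_WORKFLOW_KEYS)
    RAISE_APPLICATION_ERROR(-20036, 'Multiple A_WORKFLOW_HISTORY_KEY values in source file');
  END IF;

  -- Single value: store it on the registration record
  UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED
     SET A_WORKFLOW_HISTORY_KEY = vWorkflowKey
   WHERE A_SOURCE_FILE_RECEIVED_KEY = vReceivedKey;
END;
/
```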
### Migration of Existing Data (01A Script)
The installation includes automatic migration of A_WORKFLOW_HISTORY_KEY for existing records:
- **Scope**: Updates records with status VALIDATED, READY_FOR_INGESTION, INGESTED, or any ARCHIVED* status
- **Method**: Extracts A_WORKFLOW_HISTORY_KEY from ODS tables by matching file$name with SOURCE_FILE_NAME
- **Safety**: Wraps each configuration in its own PL/SQL exception handler - continues if the ODS table doesn't exist
- **Logging**: Detailed output showing success/failure for each configuration
- **Expected behavior**: Some records may remain NULL if:
- Files not yet ingested into ODS tables
- Files with status RECEIVED or VALIDATION_FAILED
- ODS tables don't exist or have different structure
- These NULL records will be populated when files are reprocessed
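
The per-configuration loop described above can be sketched as follows. This is an approximation of what the 01A script does, assuming the ODS table name is built from ODS_SCHEMA_NAME and TABLE_ID and that the external table exposes the source file via the `file$name` pseudocolumn, as the description indicates:

```sql
-- Sketch of the 01A migration loop: query each config's ODS table dynamically,
-- skip (and log) configurations whose table is missing or incompatible.
BEGIN
  FOR cfg IN (SELECT A_SOURCE_FILE_CONFIG_KEY, ODS_SCHEMA_NAME, TABLE_ID
                FROM CT_MRDS.A_SOURCE_FILE_CONFIG
               WHERE ODS_SCHEMA_NAME IS NOT NULL AND TABLE_ID IS NOT NULL) LOOP
    BEGIN
      EXECUTE IMMEDIATE
        'UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED sfr
            SET sfr.A_WORKFLOW_HISTORY_KEY =
                (SELECT MIN(t.A_WORKFLOW_HISTORY_KEY)
                   FROM ' || cfg.ODS_SCHEMA_NAME || '.' || cfg.TABLE_ID || ' t
                  WHERE t.file$name = sfr.SOURCE_FILE_NAME)
          WHERE sfr.A_SOURCE_FILE_CONFIG_KEY = :1
            AND sfr.A_WORKFLOW_HISTORY_KEY IS NULL'
        USING cfg.A_SOURCE_FILE_CONFIG_KEY;
      DBMS_OUTPUT.PUT_LINE('Config ' || cfg.A_SOURCE_FILE_CONFIG_KEY ||
                           ': ' || SQL%ROWCOUNT || ' record(s) updated.');
    EXCEPTION
      WHEN OTHERS THEN
        -- e.g. ORA-00942 when the ODS table does not exist: log and continue
        DBMS_OUTPUT.PUT_LINE('SKIP config ' || cfg.A_SOURCE_FILE_CONFIG_KEY ||
                             ': ' || SQLERRM);
    END;
  END LOOP;
END;
/
```

Records whose ODS table has no matching `file$name` row keep a NULL key, consistent with the expected behavior listed above.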
## Next Steps
1. **Test installation** in DEV environment:
```sql
@install_mars1409.sql
```
2. **Review migration results**: Check how many existing records were updated
3. **Validate new files**: Test with sample files containing A_WORKFLOW_HISTORY_KEY
4. **Test rollback** procedure to ensure clean restoration
5. **Deploy to higher environments** after successful DEV validation
## Installation Flow
```
install_mars1409.sql (MASTER - 8 steps)
├─ 01: Add A_WORKFLOW_HISTORY_KEY column (DDL)
├─ 01A: Update existing records (Migration)
├─ 01B: Install ENV_MANAGER.pkg (v3.3.0)
├─ 01C: Install ENV_MANAGER.pkb (v3.3.0)
├─ 02: Install FILE_MANAGER.pkg (v3.6.0)
├─ 03: Install FILE_MANAGER.pkb (v3.6.0)
├─ 04: Verify installation
└─ 05: Track package versions
```
## Rollback Flow
```
rollback_mars1409.sql (MASTER - 5 steps)
├─ 01: Restore FILE_MANAGER.pkb (v3.5.1)
├─ 02: Restore FILE_MANAGER.pkg (v3.5.1)
├─ 02A: Restore ENV_MANAGER.pkb (v3.2.0)
├─ 02B: Restore ENV_MANAGER.pkg (v3.2.0)
├─ 03: Clear A_WORKFLOW_HISTORY_KEY values
├─ 04: Drop A_WORKFLOW_HISTORY_KEY column
└─ 05: Verify rollback
```
## Post-Installation Verification
### Check migration results:
```sql
-- Count updated records
SELECT
CASE WHEN A_WORKFLOW_HISTORY_KEY IS NOT NULL THEN 'POPULATED' ELSE 'NULL' END as STATUS,
COUNT(*) as RECORD_COUNT
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
GROUP BY CASE WHEN A_WORKFLOW_HISTORY_KEY IS NOT NULL THEN 'POPULATED' ELSE 'NULL' END;
-- Check by configuration
SELECT
sfc.A_SOURCE_KEY,
sfc.SOURCE_FILE_ID,
sfc.TABLE_ID,
COUNT(*) as TOTAL_FILES,
SUM(CASE WHEN sfr.A_WORKFLOW_HISTORY_KEY IS NOT NULL THEN 1 ELSE 0 END) as POPULATED,
SUM(CASE WHEN sfr.A_WORKFLOW_HISTORY_KEY IS NULL THEN 1 ELSE 0 END) as NULL_COUNT
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED sfr
JOIN CT_MRDS.A_SOURCE_FILE_CONFIG sfc ON sfr.A_SOURCE_FILE_CONFIG_KEY = sfc.A_SOURCE_FILE_CONFIG_KEY
GROUP BY sfc.A_SOURCE_KEY, sfc.SOURCE_FILE_ID, sfc.TABLE_ID
ORDER BY sfc.A_SOURCE_KEY, sfc.SOURCE_FILE_ID;
```
### Test new file processing:
```sql
-- Process a test file (will populate A_WORKFLOW_HISTORY_KEY automatically)
EXEC FILE_MANAGER.PROCESS_SOURCE_FILE('INBOX/TEST/TEST_FILE/TEST_TABLE/test_file.csv');
-- Verify A_WORKFLOW_HISTORY_KEY was populated
SELECT A_SOURCE_FILE_RECEIVED_KEY, SOURCE_FILE_NAME, A_WORKFLOW_HISTORY_KEY, PROCESSING_STATUS
FROM CT_MRDS.A_SOURCE_FILE_RECEIVED
WHERE SOURCE_FILE_NAME LIKE '%test_file.csv%';
```
## Related Tickets
- Based on MARS-828 package structure
- Supports FILE_ARCHIVER workflow tracking improvements
## Test Results (2026-02-27)
### ✅ Installation Test
**Environment**: DEV (ggmichalski_high)
**Status**: SUCCESS
**Installation Steps**:
1. Step 1: Add A_WORKFLOW_HISTORY_KEY column - ✅ SUCCESS
2. Step 1A: Migrate existing records - ✅ SUCCESS (0 updated, 47 NULL expected)
3. Step 1B: Install ENV_MANAGER v3.3.0 specification - ✅ SUCCESS
4. Step 1C: Install ENV_MANAGER v3.3.0 body - ✅ SUCCESS
5. Step 2: Install FILE_MANAGER v3.6.0 specification - ✅ SUCCESS
6. Step 3: Install FILE_MANAGER v3.6.0 body - ✅ SUCCESS
7. Step 4: Verification - ✅ SUCCESS
8. Step 5: Version tracking - ✅ SUCCESS
**Post-Installation State**:
- FILE_MANAGER version: 3.6.0
- ENV_MANAGER version: 3.3.0
- A_WORKFLOW_HISTORY_KEY column: EXISTS (NUMBER, NULLABLE)
- Package compilation: ALL VALID
- Migration results: 0 records updated (expected - no data in ODS tables)
### ✅ Rollback Test
**Status**: SUCCESS
**Rollback Steps**:
1. Step 1: Restore FILE_MANAGER body v3.5.1 - ✅ SUCCESS
2. Step 2: Restore FILE_MANAGER spec v3.5.1 - ✅ SUCCESS
3. Step 2A: Restore ENV_MANAGER body v3.2.0 - ✅ SUCCESS
4. Step 2B: Restore ENV_MANAGER spec v3.2.0 - ✅ SUCCESS
5. Step 3: Clear A_WORKFLOW_HISTORY_KEY values - ✅ SUCCESS (0 cleared, 47 already NULL)
6. Step 4: Drop A_WORKFLOW_HISTORY_KEY column - ✅ SUCCESS
7. Step 5: Verification - ✅ SUCCESS
**Post-Rollback State**:
- FILE_MANAGER version: 3.5.1 (restored)
- ENV_MANAGER version: 3.2.0 (restored)
- A_WORKFLOW_HISTORY_KEY column: REMOVED
- Package compilation: ALL VALID
**Critical Findings**:
- ⚠️ **Prerequisite**: MARS-828 column rename must be applied first (ARCHIVE_THRESHOLD_DAYS)
- ⚠️ **Database State**: rollback_version packages require MARS-828 naming conventions
- **Solution**: Applied MARS-828 01a script before testing - now works correctly
**Package Ready for**:
- ✅ DEV deployment (tested successfully)
- ✅ QA deployment (after DEV validation)
- ⏳ PROD deployment (pending higher environment validation)


@@ -0,0 +1,147 @@
-- ============================================================================
-- MARS-1409 Master Installation Script
-- ============================================================================
-- Author: Grzegorz Michalski
-- Purpose: Add A_WORKFLOW_HISTORY_KEY column to A_SOURCE_FILE_RECEIVED table
-- Target Schema: CT_MRDS
-- Estimated Time: 1-2 minutes
-- Prerequisites: FILE_MANAGER v3.x, ENV_MANAGER v3.x, ADMIN privileges
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
SET VERIFY OFF
SET FEEDBACK ON
SET ECHO OFF
-- Create log directory if it doesn't exist
host mkdir log 2>nul
-- Generate dynamic SPOOL filename with timestamp
var filename VARCHAR2(100)
BEGIN
:filename := 'log/INSTALL_MARS_1409_' || SYS_CONTEXT('USERENV', 'CON_NAME') || '_' || TO_CHAR(SYSDATE,'YYYYMMDD_HH24MISS') || '.log';
END;
/
column filename new_value _filename
select :filename filename from dual;
spool &_filename
PROMPT
PROMPT ============================================================================
PROMPT MARS-1409 Installation Starting
PROMPT ============================================================================
PROMPT Package: CT_MRDS.FILE_MANAGER v3.X.X
PROMPT Change: Add A_WORKFLOW_HISTORY_KEY to A_SOURCE_FILE_RECEIVED; add ARCHIVAL_STRATEGY, ARCH_MINIMUM_AGE_MONTHS, IS_WORKFLOW_SUCCESS_REQUIRED and WORKFLOW_SUCCESS_* columns to A_TABLE_STAT/HIST
PROMPT Purpose: Direct tracking of workflow history keys in file registration; self-documenting statistics records; separate total vs workflow-success statistics
PROMPT Steps: 14 (DDL x2, ENV_MANAGER Update, FILE_MANAGER Update, FILE_ARCHIVER Update, DATA_EXPORTER Update, Trigger Update, Verification, Tracking, Version Verification)
PROMPT Timestamp:
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS install_start FROM DUAL;
PROMPT ============================================================================
-- Confirm installation with user
ACCEPT continue CHAR PROMPT 'Type YES to continue with installation, or Ctrl+C to abort: '
WHENEVER SQLERROR EXIT SQL.SQLCODE
BEGIN
IF '&continue' IS NULL OR TRIM('&continue') IS NULL OR UPPER(TRIM('&continue')) != 'YES' THEN
RAISE_APPLICATION_ERROR(-20000, 'Installation aborted by user');
END IF;
END;
/
PROMPT
PROMPT ============================================================================
PROMPT STEP 1: Add A_WORKFLOW_HISTORY_KEY column to A_SOURCE_FILE_RECEIVED
PROMPT ============================================================================
@@01_MARS_1409_add_workflow_history_key_column.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 2: Add ARCHIVAL_STRATEGY, ARCH_MINIMUM_AGE_MONTHS, IS_WORKFLOW_SUCCESS_REQUIRED and WORKFLOW_SUCCESS_FILE_COUNT/ROW_COUNT/SIZE columns to A_TABLE_STAT and A_TABLE_STAT_HIST
PROMPT ============================================================================
@@02_MARS_1409_add_archival_strategy_column.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 3: Update ENV_MANAGER package specification
PROMPT ============================================================================
@@03_MARS_1409_install_CT_MRDS_ENV_MANAGER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 4: Update ENV_MANAGER package body
PROMPT ============================================================================
@@04_MARS_1409_install_CT_MRDS_ENV_MANAGER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 5: Update FILE_MANAGER package specification
PROMPT ============================================================================
@@05_MARS_1409_install_CT_MRDS_FILE_MANAGER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 6: Update FILE_MANAGER package body
PROMPT ============================================================================
@@06_MARS_1409_install_CT_MRDS_FILE_MANAGER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 7: Update FILE_ARCHIVER package specification
PROMPT ============================================================================
@@07_MARS_1409_install_CT_MRDS_FILE_ARCHIVER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 8: Update FILE_ARCHIVER package body
PROMPT ============================================================================
@@08_MARS_1409_install_CT_MRDS_FILE_ARCHIVER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 9: Install DATA_EXPORTER package specification
PROMPT ============================================================================
@@11_MARS_1409_install_CT_MRDS_DATA_EXPORTER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 10: Install DATA_EXPORTER package body
PROMPT ============================================================================
@@12_MARS_1409_install_CT_MRDS_DATA_EXPORTER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 11: Update A_WORKFLOW_HISTORY trigger
PROMPT ============================================================================
@@09_MARS_1409_install_CT_MRDS_A_WORKFLOW_HISTORY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 12: Verify installation
PROMPT ============================================================================
@@10_MARS_1409_verify_installation.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 13: Track package versions
PROMPT ============================================================================
@@track_package_versions.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 14: Verify package versions
PROMPT ============================================================================
@@verify_packages_version.sql
PROMPT
PROMPT ============================================================================
PROMPT MARS-1409 Installation Complete
PROMPT ============================================================================
PROMPT Final Status:
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS install_end FROM DUAL;
PROMPT
PROMPT Review the log file for detailed results: &_filename
PROMPT ============================================================================
spool off
quit;


@@ -0,0 +1,106 @@
-- ====================================================================
-- A_SOURCE_FILE_CONFIG Table
-- ====================================================================
-- Purpose: Store source file configuration and processing rules
-- MARS-1049: Added ENCODING column for CSV character set support
-- MARS-828: Added ARCHIVAL_STRATEGY and MINIMUM_AGE_MONTHS for archival automation
-- MARS-1409: Added IS_WORKFLOW_SUCCESS_REQUIRED flag for workflow bypass
-- ====================================================================
CREATE TABLE CT_MRDS.A_SOURCE_FILE_CONFIG (
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL ENABLE,
A_SOURCE_KEY VARCHAR2(30) NOT NULL ENABLE,
SOURCE_FILE_TYPE VARCHAR2(200), -- Can be 'INPUT' or 'CONTAINER' or 'LOAD_CONFIG'
SOURCE_FILE_ID VARCHAR2(200),
SOURCE_FILE_DESC VARCHAR2(2000),
SOURCE_FILE_NAME_PATTERN VARCHAR2(200),
TABLE_ID VARCHAR2(200),
TEMPLATE_TABLE_NAME VARCHAR2(200),
CONTAINER_FILE_KEY NUMBER(38,0),
ARCHIVE_THRESHOLD_DAYS NUMBER(4,0),
ARCHIVE_THRESHOLD_FILES_COUNT NUMBER(38,0),
ARCHIVE_THRESHOLD_BYTES_SUM NUMBER(38,0),
ODS_SCHEMA_NAME VARCHAR2(100),
ARCHIVE_THRESHOLD_ROWS_COUNT NUMBER(38,0),
HOURS_TO_EXPIRE_STATISTICS NUMBER(38,3),
ARCHIVAL_STRATEGY VARCHAR2(50),
MINIMUM_AGE_MONTHS NUMBER(3,0),
ENCODING VARCHAR2(50) DEFAULT 'UTF8',
IS_ARCHIVE_ENABLED CHAR(1) DEFAULT 'N' NOT NULL,
IS_KEPT_IN_TRASH CHAR(1) DEFAULT 'N' NOT NULL,
IS_WORKFLOW_SUCCESS_REQUIRED CHAR(1) DEFAULT 'Y' NOT NULL,
CONSTRAINT A_SOURCE_FILE_CONFIG_PK PRIMARY KEY (A_SOURCE_FILE_CONFIG_KEY),
CONSTRAINT CHK_IS_ARCHIVE_ENABLED CHECK (IS_ARCHIVE_ENABLED IN ('Y', 'N')),
CONSTRAINT CHK_IS_KEPT_IN_TRASH CHECK (IS_KEPT_IN_TRASH IN ('Y', 'N')),
CONSTRAINT CHK_IS_WORKFLOW_SUCCESS_REQUIRED CHECK (IS_WORKFLOW_SUCCESS_REQUIRED IN ('Y', 'N')),
CONSTRAINT SOURCE_FILE_TYPE_CHK CHECK (SOURCE_FILE_TYPE IN ('INPUT', 'CONTAINER', 'LOAD_CONFIG')),
CONSTRAINT ASFC_A_SOURCE_KEY_FK FOREIGN KEY(A_SOURCE_KEY) REFERENCES CT_MRDS.A_SOURCE(A_SOURCE_KEY),
CONSTRAINT ASFC_CONTAINER_FILE_KEY_FK FOREIGN KEY(CONTAINER_FILE_KEY) REFERENCES CT_MRDS.A_SOURCE_FILE_CONFIG(A_SOURCE_FILE_CONFIG_KEY),
CONSTRAINT A_SOURCE_FILE_CONFIG_UQ1 UNIQUE(SOURCE_FILE_TYPE, SOURCE_FILE_ID, TABLE_ID)
) TABLESPACE "DATA";
-- Column comments
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY IS
'Primary key - unique identifier for source file configuration record';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_KEY IS
'Foreign key to A_SOURCE table - identifies the source system (e.g., LM, C2D, CSDB)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE IS
'Type of file configuration: INPUT (data files), CONTAINER (XML files), or LOAD_CONFIG (configuration files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID IS
'Unique identifier for the source file within the source system (e.g., UC_DISSEM, STANDING_FACILITIES)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_DESC IS
'Human-readable description of the source file and its purpose';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_NAME_PATTERN IS
'Filename pattern for matching incoming files (supports wildcards, e.g., UC_NMA_DISSEM-*.csv)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID IS
'Identifier for the target table where data will be loaded (without schema prefix)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME IS
'Fully qualified name of template table in CT_ET_TEMPLATES schema used for external table creation';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY IS
'Foreign key to parent container configuration when this file is part of an XML container (NULL for standalone files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_DAYS IS
'Threshold for THRESHOLD_BASED strategy: archive data older than N days';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_FILES_COUNT IS
'Trigger archival when file count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_BYTES_SUM IS
'Trigger archival when total size in bytes exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_ROWS_COUNT IS
'Trigger archival when total row count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ODS_SCHEMA_NAME IS
'Schema name where ODS external tables are created (typically ODS)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.HOURS_TO_EXPIRE_STATISTICS IS
'Number of hours before table statistics expire and need to be recalculated';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY IS
'Archival strategy: THRESHOLD_BASED (days-based), MINIMUM_AGE_MONTHS (0=current month, N=retain N months), HYBRID (combination)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS IS
'Minimum age in months before archival (required for MINIMUM_AGE_MONTHS and HYBRID strategies, 0=current month only)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING IS
'Oracle character set name for CSV files (e.g., UTF8, WE8MSWIN1252, EE8ISO8859P2)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED IS
'Y=Enable archiving, N=Skip archiving. Controls if table participates in archival process';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_KEPT_IN_TRASH IS
'Y=Keep files in TRASH after archiving, N=Delete immediately. Controls TRASH retention policy';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED IS
'Y=Archival requires WORKFLOW_SUCCESSFUL=Y (standard DBT flow), N=Archive regardless of workflow completion status (bypass for manual/non-DBT sources). Added MARS-1409';
GRANT SELECT, INSERT, UPDATE, DELETE ON CT_MRDS.A_SOURCE_FILE_CONFIG TO MRDS_LOADER_ROLE;


@@ -0,0 +1,74 @@
-- ====================================================================
-- A_SOURCE_FILE_RECEIVED Table
-- ====================================================================
-- Purpose: Track received files and their processing status
-- ====================================================================
CREATE TABLE CT_MRDS.A_SOURCE_FILE_RECEIVED (
A_SOURCE_FILE_RECEIVED_KEY NUMBER(38,0) NOT NULL ENABLE,
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL ENABLE,
SOURCE_FILE_NAME VARCHAR2(1000) NOT NULL,
CHECKSUM VARCHAR2(128),
CREATED TIMESTAMP(6) WITH TIME ZONE,
BYTES NUMBER,
RECEPTION_DATE DATE NOT NULL,
PROCESSING_STATUS VARCHAR2(200),
EXTERNAL_TABLE_NAME VARCHAR2(200),
PARTITION_YEAR VARCHAR2(4),
PARTITION_MONTH VARCHAR2(2),
ARCH_PATH VARCHAR2(1000),
PROCESS_NAME VARCHAR2(200),
A_WORKFLOW_HISTORY_KEY NUMBER,
CONSTRAINT A_SOURCE_FILE_RECEIVED_PK PRIMARY KEY (A_SOURCE_FILE_RECEIVED_KEY),
CONSTRAINT ASFR_A_SOURCE_FILE_CONFIG_KEY_FK FOREIGN KEY(A_SOURCE_FILE_CONFIG_KEY) REFERENCES CT_MRDS.A_SOURCE_FILE_CONFIG(A_SOURCE_FILE_CONFIG_KEY),
CONSTRAINT A_SOURCE_FILE_RECEIVED_CHK CHECK (PROCESSING_STATUS IN ('RECEIVED', 'VALIDATION_FAILED', 'VALIDATED', 'READY_FOR_INGESTION', 'INGESTED', 'ARCHIVED', 'ARCHIVED_AND_TRASHED', 'ARCHIVED_AND_PURGED'))
) TABLESPACE "DATA";
-- Unique index for file identification (workaround: unique constraints are not supported on TIMESTAMP WITH TIME ZONE columns)
CREATE UNIQUE INDEX CT_MRDS.A_SOURCE_FILE_RECEIVED_UK1
ON CT_MRDS.A_SOURCE_FILE_RECEIVED(CHECKSUM, CREATED, BYTES);
-- Column comments
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY IS
'Primary key - unique identifier for received file record';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_CONFIG_KEY IS
'Foreign key to A_SOURCE_FILE_CONFIG - links file to its configuration and processing rules';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.SOURCE_FILE_NAME IS
'Full object name/path of the received file in OCI Object Storage (includes INBOX prefix)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.CHECKSUM IS
'MD5 checksum of file content for integrity verification and duplicate detection';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.CREATED IS
'Timestamp with timezone when file was created/uploaded to Object Storage (from DBMS_CLOUD.LIST_OBJECTS)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.BYTES IS
'File size in bytes';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.RECEPTION_DATE IS
'Date when file was registered in the system (extracted from CREATED timestamp)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PROCESSING_STATUS IS
'Current processing status: RECEIVED → VALIDATED (or VALIDATION_FAILED if errors) → READY_FOR_INGESTION → INGESTED → ARCHIVED → ARCHIVED_AND_TRASHED → ARCHIVED_AND_PURGED';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.EXTERNAL_TABLE_NAME IS
'Name of temporary external table created for file validation (dropped after validation)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PARTITION_YEAR IS
'Year partition value (YYYY format) when file was archived to ARCHIVE bucket with Hive-style partitioning';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PARTITION_MONTH IS
'Month partition value (MM format) when file was archived to ARCHIVE bucket with Hive-style partitioning';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.ARCH_PATH IS
'Archive directory prefix in ARCHIVE bucket containing archived Parquet files (supports multiple files from parallel DBMS_CLOUD.EXPORT_DATA)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PROCESS_NAME IS
'Name of the process or DBT model that ingested this file (populated during ingestion workflow)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_WORKFLOW_HISTORY_KEY IS
'Direct link to workflow history - each file has exactly one workflow execution. Populated during VALIDATE_SOURCE_FILE_RECEIVED (MARS-1409)';
GRANT SELECT, INSERT, UPDATE, DELETE ON CT_MRDS.A_SOURCE_FILE_RECEIVED TO MRDS_LOADER_ROLE;


@@ -0,0 +1,56 @@
-- ====================================================================
-- A_TABLE_STAT Table
-- ====================================================================
-- Purpose: Store current table statistics and archival thresholds
-- ====================================================================
CREATE TABLE CT_MRDS.A_TABLE_STAT (
-- === Identity / metadata ===
A_TABLE_STAT_KEY NUMBER(38,0) PRIMARY KEY,
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL,
TABLE_NAME VARCHAR2(200) NOT NULL,
CREATED TIMESTAMP(6) DEFAULT SYSTIMESTAMP,
-- === Archival configuration snapshot (values at gather time) ===
ARCHIVAL_STRATEGY VARCHAR2(30),
ARCH_MINIMUM_AGE_MONTHS NUMBER(4,0),
ARCH_THRESHOLD_DAYS NUMBER(4,0),
IS_WORKFLOW_SUCCESS_REQUIRED CHAR(1),
-- === Total statistics (all files, no workflow filter) ===
FILE_COUNT NUMBER(38,0),
ROW_COUNT NUMBER(38,0),
TOTAL_SIZE NUMBER(38,0),
-- === Over-archival-threshold statistics ===
OVER_ARCH_THRESOLD_FILE_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_ROW_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_TOTAL_SIZE NUMBER(38,0),
-- === Workflow-success statistics (WORKFLOW_SUCCESSFUL='Y' files only) ===
WORKFLOW_SUCCESS_FILE_COUNT NUMBER(38,0),
WORKFLOW_SUCCESS_ROW_COUNT NUMBER(38,0),
WORKFLOW_SUCCESS_TOTAL_SIZE NUMBER(38,0),
CONSTRAINT A_TABLE_STAT_UK1 UNIQUE(A_SOURCE_FILE_CONFIG_KEY)
) TABLESPACE "DATA";
-- Identity / metadata
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.A_TABLE_STAT_KEY IS 'Primary key, populated from A_TABLE_STAT_KEY_SEQ sequence.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.A_SOURCE_FILE_CONFIG_KEY IS 'Foreign key to A_SOURCE_FILE_CONFIG; one current-stat row per config entry (unique constraint A_TABLE_STAT_UK1).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.TABLE_NAME IS 'Fully qualified ODS external table name (SCHEMA.TABLE) for which statistics were gathered.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.CREATED IS 'Timestamp when the statistics were gathered by FILE_ARCHIVER.GATHER_TABLE_STAT.';
-- Archival configuration snapshot
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.ARCHIVAL_STRATEGY IS 'Archival strategy active when statistics were gathered (THRESHOLD_BASED, MINIMUM_AGE_MONTHS, HYBRID). Populated by FILE_ARCHIVER.GATHER_TABLE_STAT (MARS-1409).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.ARCH_MINIMUM_AGE_MONTHS IS 'Minimum age threshold in months copied from A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS. Populated for MINIMUM_AGE_MONTHS and HYBRID strategies; NULL for THRESHOLD_BASED. Populated by FILE_ARCHIVER.GATHER_TABLE_STAT (MARS-1409).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.ARCH_THRESHOLD_DAYS IS 'Archive threshold in days copied from A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_DAYS. Used by THRESHOLD_BASED and HYBRID strategies.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.IS_WORKFLOW_SUCCESS_REQUIRED IS 'Snapshot of A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED at gather time. Y = OVER_ARCH_THRESOLD counts include only files with WORKFLOW_SUCCESSFUL=Y. Added MARS-1409.';
-- Total statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.FILE_COUNT IS 'Total number of files present in the ODS external table, regardless of workflow success status.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.ROW_COUNT IS 'Total row count across all files in the ODS external table.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.TOTAL_SIZE IS 'Total size in bytes of all files in the ODS bucket location.';
-- Over-threshold statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.OVER_ARCH_THRESOLD_FILE_COUNT IS 'Number of files that satisfy the archival threshold condition. When IS_WORKFLOW_SUCCESS_REQUIRED=Y, also requires WORKFLOW_SUCCESSFUL=Y.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.OVER_ARCH_THRESOLD_ROW_COUNT IS 'Row count for files that satisfy the archival threshold condition.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.OVER_ARCH_THRESOLD_TOTAL_SIZE IS 'Size in bytes for files that satisfy the archival threshold condition.';
-- Workflow-success statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.WORKFLOW_SUCCESS_FILE_COUNT IS 'Count of files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.WORKFLOW_SUCCESS_ROW_COUNT IS 'Row count for files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT.WORKFLOW_SUCCESS_TOTAL_SIZE IS 'Size in bytes for files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
-- Note: A_TABLE_STAT_UK1 index is auto-created by the UNIQUE constraint definition above.


@@ -0,0 +1,53 @@
-- ====================================================================
-- A_TABLE_STAT_HIST Table
-- ====================================================================
-- Purpose: Store historical table statistics for trend analysis
-- ====================================================================
CREATE TABLE CT_MRDS.A_TABLE_STAT_HIST (
-- === Identity / metadata ===
A_TABLE_STAT_HIST_KEY NUMBER(38,0) PRIMARY KEY,
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL,
TABLE_NAME VARCHAR2(200) NOT NULL,
CREATED TIMESTAMP(6) DEFAULT SYSTIMESTAMP,
-- === Archival configuration snapshot (values at gather time) ===
ARCHIVAL_STRATEGY VARCHAR2(30),
ARCH_MINIMUM_AGE_MONTHS NUMBER(4,0),
ARCH_THRESHOLD_DAYS NUMBER(4,0),
IS_WORKFLOW_SUCCESS_REQUIRED CHAR(1),
-- === Total statistics (all files, no workflow filter) ===
FILE_COUNT NUMBER(38,0),
ROW_COUNT NUMBER(38,0),
TOTAL_SIZE NUMBER(38,0),
-- === Over-archival-threshold statistics ===
OVER_ARCH_THRESOLD_FILE_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_ROW_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_TOTAL_SIZE NUMBER(38,0),
-- === Workflow-success statistics (WORKFLOW_SUCCESSFUL='Y' files only) ===
WORKFLOW_SUCCESS_FILE_COUNT NUMBER(38,0),
WORKFLOW_SUCCESS_ROW_COUNT NUMBER(38,0),
WORKFLOW_SUCCESS_TOTAL_SIZE NUMBER(38,0)
) TABLESPACE "DATA";
-- Identity / metadata
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.A_TABLE_STAT_HIST_KEY IS 'Primary key, populated from A_TABLE_STAT_KEY_SEQ sequence (shared with A_TABLE_STAT).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.A_SOURCE_FILE_CONFIG_KEY IS 'Foreign key to A_SOURCE_FILE_CONFIG. Multiple history rows per config entry (no unique constraint).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.TABLE_NAME IS 'Fully qualified ODS external table name (SCHEMA.TABLE) for which statistics were gathered.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.CREATED IS 'Timestamp when the statistics snapshot was taken by FILE_ARCHIVER.GATHER_TABLE_STAT.';
-- Archival configuration snapshot
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.ARCHIVAL_STRATEGY IS 'Archival strategy active when statistics were gathered (THRESHOLD_BASED, MINIMUM_AGE_MONTHS, HYBRID). Populated by FILE_ARCHIVER.GATHER_TABLE_STAT (MARS-1409).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.ARCH_MINIMUM_AGE_MONTHS IS 'Minimum age threshold in months copied from A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS. Populated for MINIMUM_AGE_MONTHS and HYBRID strategies; NULL for THRESHOLD_BASED. Populated by FILE_ARCHIVER.GATHER_TABLE_STAT (MARS-1409).';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.ARCH_THRESHOLD_DAYS IS 'Archive threshold in days copied from A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_DAYS. Used by THRESHOLD_BASED and HYBRID strategies.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.IS_WORKFLOW_SUCCESS_REQUIRED IS 'Snapshot of A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED at gather time. Y = OVER_ARCH_THRESOLD counts include only files with WORKFLOW_SUCCESSFUL=Y. Added MARS-1409.';
-- Total statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.FILE_COUNT IS 'Total number of files present in the ODS external table at gather time, regardless of workflow success status.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.ROW_COUNT IS 'Total row count across all files in the ODS external table at gather time.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.TOTAL_SIZE IS 'Total size in bytes of all files in the ODS bucket location at gather time.';
-- Over-threshold statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.OVER_ARCH_THRESOLD_FILE_COUNT IS 'Number of files that satisfied the archival threshold condition. When IS_WORKFLOW_SUCCESS_REQUIRED=Y, also required WORKFLOW_SUCCESSFUL=Y.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.OVER_ARCH_THRESOLD_ROW_COUNT IS 'Row count for files that satisfied the archival threshold condition.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.OVER_ARCH_THRESOLD_TOTAL_SIZE IS 'Size in bytes for files that satisfied the archival threshold condition.';
-- Workflow-success statistics
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.WORKFLOW_SUCCESS_FILE_COUNT IS 'Count of files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.WORKFLOW_SUCCESS_ROW_COUNT IS 'Row count for files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
COMMENT ON COLUMN CT_MRDS.A_TABLE_STAT_HIST.WORKFLOW_SUCCESS_TOTAL_SIZE IS 'Size in bytes for files with WORKFLOW_SUCCESSFUL=Y. Always populated regardless of IS_WORKFLOW_SUCCESS_REQUIRED flag. Added MARS-1409.';
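-- The snapshot columns above support simple trend queries. A hedged sketch
-- (assuming snapshots are taken periodically by FILE_ARCHIVER.GATHER_TABLE_STAT;
-- the query itself is illustrative, not part of the delivered objects):

```sql
-- Archival backlog per table over the last three months.
SELECT table_name,
       TRUNC(created)                        AS snapshot_day,
       file_count,
       over_arch_thresold_file_count         AS archivable_files,
       ROUND(100 * over_arch_thresold_total_size
                 / NULLIF(total_size, 0), 1) AS archivable_size_pct
  FROM ct_mrds.a_table_stat_hist
 WHERE created >= ADD_MONTHS(SYSDATE, -3)
 ORDER BY table_name, snapshot_day;
```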

View File

@@ -0,0 +1,62 @@
WHENEVER SQLERROR CONTINUE
GRANT SELECT, INSERT, UPDATE, DELETE ON ct_ods.a_load_history TO ct_mrds;
WHENEVER SQLERROR EXIT SQL.SQLCODE
-- ============================================================================
-- A_WORKFLOW_HISTORY Trigger Definition
-- ============================================================================
CREATE OR REPLACE EDITIONABLE TRIGGER "CT_MRDS"."A_WORKFLOW_HISTORY"
AFTER INSERT OR UPDATE OF workflow_successful ON ct_mrds.a_workflow_history
REFERENCING NEW AS new OLD AS old
FOR EACH ROW
DECLARE
v_workflow_name VARCHAR2(128);
v_wla_id NUMBER;
BEGIN
IF :new.service_name = 'ODS' AND :new.workflow_name IN (
'w_ODS_LM_STANDING_FACILITIES', 'w_ODS_CSDB_DEBT', 'w_ODS_CSDB_DEBT_DAILY', 'w_ODS_CSDB_RATINGS_FULL',
'w_ODS_TMS_LIMIT_ACCESS', 'w_ODS_TMS_PORTFOLIO_ACCESS', 'w_ODS_TMS_PORTFOLIO_TREE',
'w_ODS_TMS_COLLATERAL_INVENTORY', 'w_ODS_TOP_FULLBIDARRAY_COMPILED', 'w_ODS_TOP_ANNOUNCEMENT',
'w_ODS_TOP_ALLOTMENT_MODIFICATIONS', 'w_ODS_TOP_ALLOTMENT', 'w_ODS_CEPH_PRICING', 'w_ODS_C2D_MPEC'
) THEN
IF :new.workflow_successful = 'Y' AND :new.workflow_successful <> NVL(:old.workflow_successful, 'N') THEN
CASE
WHEN :new.workflow_name = 'w_ODS_LM_STANDING_FACILITIES' THEN v_workflow_name := 'w_ODS_LM_STANDING_FACILITY';
WHEN :new.workflow_name = 'w_ODS_TMS_LIMIT_ACCESS' THEN v_workflow_name := 'w_ODS_TMS_RAR_LIMITACCESS';
WHEN :new.workflow_name = 'w_ODS_TMS_PORTFOLIO_ACCESS' THEN v_workflow_name := 'w_ODS_TMS_RAR_PORTFOLIOACCESS';
WHEN :new.workflow_name = 'w_ODS_TMS_PORTFOLIO_TREE' THEN v_workflow_name := 'w_ODS_TMS_RAR_PORTFOLIOTREE';
WHEN :new.workflow_name = 'w_ODS_TMS_COLLATERAL_INVENTORY' THEN v_workflow_name := 'w_ODS_TMS_RAR_RARCOLLATERALINVENTORY';
WHEN :new.workflow_name = 'w_ODS_TOP_FULLBIDARRAY_COMPILED' THEN v_workflow_name := 'w_ODS_TOP_FULLBIDARRAY_COMPILED';
WHEN :new.workflow_name = 'w_ODS_TOP_ANNOUNCEMENT' THEN v_workflow_name := 'w_ODS_TOP_ANNOUNCEMENT';
WHEN :new.workflow_name = 'w_ODS_TOP_ALLOTMENT_MODIFICATIONS' THEN v_workflow_name := 'w_ODS_TOP_ALLOTMENT_MODIFICATIONS';
WHEN :new.workflow_name = 'w_ODS_TOP_ALLOTMENT' THEN v_workflow_name := 'w_ODS_TOP_ALLOTMENT';
WHEN :new.workflow_name = 'w_ODS_CEPH_PRICING' THEN v_workflow_name := 'w_ODS_CEPH_PRICING';
WHEN :new.workflow_name = 'w_ODS_C2D_MPEC' THEN v_workflow_name := 'w_ODS_C2D_MPEC';
ELSE
v_workflow_name := :new.workflow_name;
END CASE;
BEGIN
v_wla_id := TO_NUMBER(:new.orchestration_run_id);
EXCEPTION
    -- Non-numeric orchestration_run_id: leave WLA_RUN_ID as NULL.
    WHEN OTHERS THEN v_wla_id := NULL;
END;
INSERT INTO ct_ods.a_load_history (
a_etl_load_set_key, workflow_name, infa_run_id, load_start, load_end, exdi_appl_req_id, exdi_correlation_id, load_successful, wla_run_id, dq_flag
) VALUES (
:new.a_workflow_history_key, v_workflow_name, NULL, :new.workflow_start, :new.workflow_end, NULL, NULL, :new.workflow_successful, v_wla_id, 'F'
);
END IF;
END IF;
-- MARS-1409: When workflow completes successfully, mark linked files as INGESTED
IF :new.workflow_successful = 'Y' THEN
IF INSERTING OR (UPDATING AND (:old.workflow_successful IS NULL OR :old.workflow_successful != 'Y')) THEN
UPDATE CT_MRDS.A_SOURCE_FILE_RECEIVED
SET PROCESSING_STATUS = 'INGESTED',
PROCESS_NAME = :new.service_name
WHERE A_WORKFLOW_HISTORY_KEY = :new.a_workflow_history_key;
END IF;
END IF;
END;
/
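-- For orientation, a hypothetical update that fires the trigger above (the key
-- value 12345 is illustrative): for mapped ODS workflows it inserts a row into
-- ct_ods.a_load_history, and the MARS-1409 branch flags linked files as INGESTED.

```sql
-- Mark a workflow as successful; the trigger reacts to this column change.
UPDATE ct_mrds.a_workflow_history
   SET workflow_successful = 'Y'
 WHERE a_workflow_history_key = 12345;  -- sample key

-- Verify the MARS-1409 side effect on linked files:
SELECT processing_status, process_name
  FROM ct_mrds.a_source_file_received
 WHERE a_workflow_history_key = 12345;
```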

File diff suppressed because it is too large

View File

@@ -0,0 +1,220 @@
create or replace PACKAGE CT_MRDS.DATA_EXPORTER
AUTHID CURRENT_USER
AS
/**
* Data Export Package: Provides comprehensive data export capabilities to various formats (CSV, Parquet)
* with support for cloud storage integration via Oracle Cloud Infrastructure (OCI).
* The structure of this comment is used by the GET_PACKAGE_DOCUMENTATION function,
* which returns documentation text for the Confluence page (to copy-paste it).
**/
-- Package Version Information
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '2.17.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(19) := '2026-03-11 10:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(50) := 'MRDS Development Team';
-- Version History (last 3-5 changes)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'v2.17.0 (2026-03-11): PARQUET FIX - Added pFormat parameter to buildQueryWithDateFormats. REPLACE(col,CHR(34)) now applied only when pFormat=CSV. EXPORT_TABLE_DATA_BY_DATE passes PARQUET - string data was being corrupted (single " doubled to ""). Parquet is binary and needs no quote escaping.' || CHR(10) ||
'v2.16.0 (2026-03-11): RFC 4180 FIX - Added REPLACE(col,CHR(34),CHR(34)||CHR(34)) in buildQueryWithDateFormats for VARCHAR2/CHAR/CLOB. Pre-doubled values produce compliant CSV for ORACLE_LOADER OPTIONALLY ENCLOSED BY chr(34).' || CHR(10) ||
'v2.6.3 (2026-01-28): COMPILATION FIX - Resolved ORA-00904 error in EXPORT_PARTITION_PARALLEL. SQLERRM and DBMS_UTILITY.FORMAT_ERROR_BACKTRACE cannot be used directly in SQL UPDATE statements. Now properly assigned to vgMsgTmp variable before UPDATE.' || CHR(10) ||
'v2.6.2 (2026-01-28): CRITICAL FIX - Race condition when multiple exports run simultaneously. Changed DELETE to filter by age (>24h) instead of deleting all COMPLETED chunks. Prevents concurrent sessions from deleting each other chunks. Session-safe cleanup with TASK_NAME filtering. Enables true parallel execution of multiple export jobs.' || CHR(10) ||
'v2.6.1 (2026-01-28): Added DELETE_FAILED_EXPORT_FILE procedure to clean up partial/corrupted files before retry. When partition fails mid-export, partial file is deleted before retry to prevent Oracle from creating _1 suffixed duplicates. Ensures clean retry without orphaned files in OCI bucket.' || CHR(10) ||
'v2.6.0 (2026-01-28): CRITICAL FIX - Added STATUS tracking to A_PARALLEL_EXPORT_CHUNKS table to prevent data duplication on retry. System now restarts ONLY failed partitions instead of re-exporting all data. Added ERROR_MESSAGE and EXPORT_TIMESTAMP columns for better error handling and monitoring. Prevents duplicate file creation when parallel tasks fail (e.g., 22 partitions with 16 threads, 3 failures no longer duplicates 19 successful exports).' || CHR(10) ||
'v2.5.0 (2026-01-26): Added recorddelimiter parameter with CRLF (CHR(13)||CHR(10)) for CSV exports to ensure Windows-compatible line endings. Improves cross-platform compatibility when CSV files are opened in Windows applications (Notepad, Excel).' || CHR(10) ||
'v2.4.0 (2026-01-11): Added pTemplateTableName parameter for per-column date format configuration. Implements dynamic query building with TO_CHAR for each date/timestamp column using FILE_MANAGER.GET_DATE_FORMAT. Supports 3-tier hierarchy: column-specific, template DEFAULT, global fallback. Eliminates single dateformat limitation of DBMS_CLOUD.EXPORT_DATA.' || CHR(10) ||
'v2.3.0 (2025-12-20): Added parallel partition processing using DBMS_PARALLEL_EXECUTE. New pParallelDegree parameter (1-16, default 1) for EXPORT_TABLE_DATA_BY_DATE and EXPORT_TABLE_DATA_TO_CSV_BY_DATE procedures. Each year/month partition processed in separate thread for improved performance.' || CHR(10) ||
'v2.2.0 (2025-12-19): DRY refactoring - extracted shared helper functions (sanitizeFilename, VALIDATE_TABLE_AND_COLUMNS, GET_PARTITIONS, EXPORT_SINGLE_PARTITION worker procedure). Reduced code duplication by ~400 lines. Prepared architecture for v2.3.0 parallel processing.' || CHR(10) ||
'v2.1.1 (2025-12-04): Fixed JOIN column reference A_WORKFLOW_HISTORY_KEY -> A_ETL_LOAD_SET_KEY, added consistent column mapping and dynamic column list to EXPORT_TABLE_DATA procedure, enhanced DEBUG logging for all export operations' || CHR(10) ||
'v2.1.0 (2025-10-22): Added version tracking and PARTITION_YEAR/PARTITION_MONTH support' || CHR(10) ||
'v2.0.0 (2025-10-01): Separated export functionality from FILE_MANAGER package' || CHR(10);
cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);
vgMsgTmp VARCHAR2(32000);
---------------------------------------------------------------------------------------------------------------------------
-- TYPE DEFINITIONS FOR PARTITION HANDLING
---------------------------------------------------------------------------------------------------------------------------
/**
* Record type for year/month partition information
**/
TYPE partition_rec IS RECORD (
year VARCHAR2(4),
month VARCHAR2(2)
);
/**
* Table type for collection of partition records
**/
TYPE partition_tab IS TABLE OF partition_rec;
---------------------------------------------------------------------------------------------------------------------------
-- INTERNAL PARALLEL PROCESSING CALLBACK
---------------------------------------------------------------------------------------------------------------------------
/**
* @name EXPORT_PARTITION_PARALLEL
* @desc Internal callback procedure for DBMS_PARALLEL_EXECUTE.
* Processes single partition (year/month) chunk in parallel task.
* Called by DBMS_PARALLEL_EXECUTE framework for each chunk.
* This procedure is PUBLIC because DBMS_PARALLEL_EXECUTE requires it,
* but should NOT be called directly by external code.
* @param pStartId - Chunk start ID (CHUNK_ID from A_PARALLEL_EXPORT_CHUNKS table)
* @param pEndId - Chunk end ID (same as pStartId for single-row chunks)
**/
PROCEDURE EXPORT_PARTITION_PARALLEL (
pStartId IN NUMBER,
pEndId IN NUMBER
);
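-- For orientation, a callback like the one above is typically wired up roughly
-- as follows. This is a sketch of the standard DBMS_PARALLEL_EXECUTE pattern;
-- the task name, chunk table query, and parallel level are illustrative and do
-- not reflect the package body's actual internals.

```sql
DECLARE
  v_task VARCHAR2(64) := 'EXPORT_DEMO_TASK';  -- illustrative task name
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(v_task);
  -- One chunk per pending partition row; START_ID = END_ID = CHUNK_ID.
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(
    task_name => v_task,
    sql_stmt  => 'SELECT chunk_id, chunk_id FROM ct_mrds.a_parallel_export_chunks
                   WHERE status = ''PENDING''',
    by_rowid  => FALSE);
  -- The framework binds :start_id / :end_id for each chunk it dispatches.
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => v_task,
    sql_stmt       => 'BEGIN CT_MRDS.DATA_EXPORTER.EXPORT_PARTITION_PARALLEL(:start_id, :end_id); END;',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 8);
  DBMS_PARALLEL_EXECUTE.DROP_TASK(v_task);
END;
/
```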
---------------------------------------------------------------------------------------------------------------------------
-- MAIN EXPORT PROCEDURES
---------------------------------------------------------------------------------------------------------------------------
/**
* @name EXPORT_TABLE_DATA
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
 *       Exports data into a CSV file on OCI infrastructure.
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example
* begin
* DATA_EXPORTER.EXPORT_TABLE_DATA(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'csv_exports'
* );
* end;
**/
PROCEDURE EXPORT_TABLE_DATA (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
/**
* @name EXPORT_TABLE_DATA_BY_DATE
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
 *       Exports data into PARQUET files on OCI infrastructure.
 *       Each YEAR_MONTH pair goes to a separate file (implicit partitioning).
* Allows specifying custom column list or uses T.* if pColumnList is NULL.
* Validates that all columns in pColumnList exist in the target table.
* Automatically adds 'T.' prefix to column names in pColumnList.
* Supports parallel partition processing via pParallelDegree parameter (default 1, range 1-16).
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example
* begin
* DATA_EXPORTER.EXPORT_TABLE_DATA_BY_DATE(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'parquet_exports',
* pColumnList => 'COLUMN1, COLUMN2, COLUMN3', -- Optional
* pMinDate => DATE '2024-01-01',
* pMaxDate => SYSDATE,
* pParallelDegree => 8 -- Optional, default 1, range 1-16
* );
* end;
**/
PROCEDURE EXPORT_TABLE_DATA_BY_DATE (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pColumnList IN VARCHAR2 default NULL,
pMinDate IN DATE default DATE '1900-01-01',
pMaxDate IN DATE default SYSDATE,
pParallelDegree IN NUMBER default 1,
pTemplateTableName IN VARCHAR2 default NULL,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
/**
* @name EXPORT_TABLE_DATA_TO_CSV_BY_DATE
* @desc Exports data to separate CSV files partitioned by year and month.
* Creates one CSV file for each year/month combination found in the data.
* Uses the same date filtering mechanism with CT_ODS.A_LOAD_HISTORY as EXPORT_TABLE_DATA_BY_DATE,
* but exports to CSV format instead of Parquet.
* Supports parallel partition processing via pParallelDegree parameter (1-16).
* File naming pattern: {pFileName}_YYYYMM.csv or {TABLENAME}_YYYYMM.csv (if pFileName is NULL)
* @example
* begin
* -- With custom filename
* DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'exports',
* pFileName => 'my_export.csv',
* pMinDate => DATE '2024-01-01',
* pMaxDate => SYSDATE,
* pParallelDegree => 8 -- Optional, default 1, range 1-16
* );
*
* -- With auto-generated filename (based on table name only)
* DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
* pSchemaName => 'OU_TOP',
* pTableName => 'AGGREGATED_ALLOTMENT',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'ARCHIVE',
* pFolderName => 'exports',
* pMinDate => DATE '2025-09-01',
* pMaxDate => DATE '2025-09-17'
* );
* -- This will create files like: AGGREGATED_ALLOTMENT_202509.csv, etc.
 * end;
 * pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
**/
PROCEDURE EXPORT_TABLE_DATA_TO_CSV_BY_DATE (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pFileName IN VARCHAR2 DEFAULT NULL,
pColumnList IN VARCHAR2 default NULL,
pMinDate IN DATE default DATE '1900-01-01',
pMaxDate IN DATE default SYSDATE,
pParallelDegree IN NUMBER default 1,
pTemplateTableName IN VARCHAR2 default NULL,
pMaxFileSize IN NUMBER default 104857600,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
---------------------------------------------------------------------------------------------------------------------------
-- VERSION MANAGEMENT FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* Returns the current package version number
* return: Version string in format X.Y.Z (e.g., '2.1.0')
**/
FUNCTION GET_VERSION RETURN VARCHAR2;
/**
* Returns comprehensive build information including version, date, and author
* return: Formatted string with complete build details
**/
FUNCTION GET_BUILD_INFO RETURN VARCHAR2;
/**
* Returns the version history with recent changes
* return: Multi-line string with version history
**/
FUNCTION GET_VERSION_HISTORY RETURN VARCHAR2;
END;
/
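-- A quick version check using the functions declared above (assumes the
-- package is installed and the caller has execute privilege on it):

```sql
SELECT ct_mrds.data_exporter.get_version    AS version,
       ct_mrds.data_exporter.get_build_info AS build_info
  FROM dual;
```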

File diff suppressed because it is too large

View File

@@ -0,0 +1,638 @@
create or replace PACKAGE CT_MRDS.ENV_MANAGER
AUTHID CURRENT_USER
AS
/**
* General comment for the package: please document functions and procedures as shown in the example below.
* This is the standard.
* The structure of this comment is used by the GET_PACKAGE_DOCUMENTATION function,
* which returns documentation text for the Confluence page (to copy-paste it).
**/
-- Example comment:
/**
* @name EX_PROCEDURE_NAME
* @desc Procedure description
* @example select ENV_MANAGER.EX_PROCEDURE_NAME(pParameter => 129) from dual;
* @ex_rslt Example Result
**/
-- Package Version Information (Semantic Versioning: MAJOR.MINOR.PATCH)
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.3.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-02-27 09:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(100) := 'Grzegorz Michalski';
-- Version History (Latest changes first)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'3.3.0 (2026-02-27): MARS-1409 - Added error codes for A_WORKFLOW_HISTORY_KEY validation (CODE_WORKFLOW_KEY_NULL -20035, CODE_MULTIPLE_WORKFLOW_KEYS -20036)' || CHR(13)||CHR(10) ||
'3.2.0 (2025-12-20): Added error codes for parallel execution support (CODE_INVALID_PARALLEL_DEGREE -20110, CODE_PARALLEL_EXECUTION_FAILED -20111)' || CHR(13)||CHR(10) ||
'3.1.0 (2025-10-22): Added package hash tracking and automatic change detection system (SHA256 hashing)' || CHR(13)||CHR(10) ||
'3.0.0 (2025-10-22): Added package versioning system with centralized version management functions' || CHR(13)||CHR(10) ||
'2.1.0 (2025-10-15): Added ANALYZE_VALIDATION_ERRORS function for comprehensive CSV validation analysis' || CHR(13)||CHR(10) ||
'2.0.0 (2025-10-01): Added LOG_PROCESS_ERROR procedure with enhanced error diagnostics and stack traces' || CHR(13)||CHR(10) ||
'1.5.0 (2025-09-20): Added console logging support with gvConsoleLoggingEnabled configuration' || CHR(13)||CHR(10) ||
'1.0.0 (2025-09-01): Initial release with error management and configuration system';
TYPE Error_Record IS RECORD (
code PLS_INTEGER,
message VARCHAR2(4000)
);
TYPE tErrorList IS TABLE OF Error_Record INDEX BY PLS_INTEGER;
Errors tErrorList;
guid VARCHAR2(32);
gvEnv VARCHAR2(200);
gvUsername VARCHAR2(128);
gvOsuser VARCHAR2(128);
gvMachine VARCHAR2(64);
gvModule VARCHAR2(64);
gvNameSpace VARCHAR2(200);
gvRegion VARCHAR2(200);
gvDataBucketName VARCHAR2(200);
gvInboxBucketName VARCHAR2(200);
gvArchiveBucketName VARCHAR2(200);
gvDataBucketUri VARCHAR2(200);
gvInboxBucketUri VARCHAR2(200);
gvArchiveBucketUri VARCHAR2(200);
gvCredentialName VARCHAR2(200);
-- Overwritten by variable "LoggingEnabled" in A_FILE_MANAGER_CONFIG.CONFIG_VARIABLE table
gvLoggingEnabled VARCHAR2(3) := 'ON'; -- 'ON' or 'OFF'
-- Overwritten by variable "MinLogLevel" in A_FILE_MANAGER_CONFIG.CONFIG_VARIABLE table
-- Possible values: DEBUG, INFO, WARNING, ERROR
gvMinLogLevel VARCHAR2(10) := 'DEBUG';
-- Overwritten by variable "DefaultDateFormat" in A_FILE_MANAGER_CONFIG.CONFIG_VARIABLE table
gvDefaultDateFormat VARCHAR2(200) := 'DD/MM/YYYY HH24:MI:SS';
-- Overwritten by variable "ConsoleLoggingEnabled" in A_FILE_MANAGER_CONFIG.CONFIG_VARIABLE table
gvConsoleLoggingEnabled VARCHAR2(3) := 'ON'; -- 'ON' or 'OFF'
cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);
vgSourceFileConfigKey PLS_INTEGER;
vgMsgTmp VARCHAR2(32000);
--Exceptions
ERR_EMPTY_FILEURI_AND_RECKEY EXCEPTION;
CODE_EMPTY_FILEURI_AND_RECKEY CONSTANT PLS_INTEGER := -20001;
MSG_EMPTY_FILEURI_AND_RECKEY VARCHAR2(4000) := 'Either pFileUri or pSourceFileReceivedKey must not be null';
PRAGMA EXCEPTION_INIT( ERR_EMPTY_FILEURI_AND_RECKEY
,CODE_EMPTY_FILEURI_AND_RECKEY);
ERR_NO_CONFIG_MATCH_FOR_FILEURI EXCEPTION;
CODE_NO_CONFIG_MATCH_FOR_FILEURI CONSTANT PLS_INTEGER := -20002;
MSG_NO_CONFIG_MATCH_FOR_FILEURI VARCHAR2(4000) := 'No match for source file in A_SOURCE_FILE_CONFIG table'
||cgBL||' The file provided in parameter: pFileUri does not have '
||cgBL||' corresponding configuration in A_SOURCE_FILE_CONFIG table';
PRAGMA EXCEPTION_INIT( ERR_NO_CONFIG_MATCH_FOR_FILEURI
,CODE_NO_CONFIG_MATCH_FOR_FILEURI);
ERR_MULTIPLE_MATCH_FOR_SRCFILE EXCEPTION;
CODE_MULTIPLE_MATCH_FOR_SRCFILE CONSTANT PLS_INTEGER := -20003;
MSG_MULTIPLE_MATCH_FOR_SRCFILE VARCHAR2(4000) := 'Multiple match for source file in A_SOURCE_FILE_CONFIG table';
PRAGMA EXCEPTION_INIT( ERR_MULTIPLE_MATCH_FOR_SRCFILE
,CODE_MULTIPLE_MATCH_FOR_SRCFILE);
ERR_MISSING_COLUMN_DATE_FORMAT EXCEPTION;
CODE_MISSING_COLUMN_DATE_FORMAT CONSTANT PLS_INTEGER := -20004;
MSG_MISSING_COLUMN_DATE_FORMAT VARCHAR2(4000) := 'Missing entry in config table: A_COLUMN_DATE_FORMAT primary key(TEMPLATE_TABLE_NAME, COLUMN_NAME)'
||cgBL||' Remember: each column which data_type IN (''DATE'', ''TIMESTAMP'')'
||cgBL||' should have DateFormat specified in A_COLUMN_DATE_FORMAT table '
||cgBL||' for example: ''YYYY-MM-DD''';
PRAGMA EXCEPTION_INIT( ERR_MISSING_COLUMN_DATE_FORMAT
,CODE_MISSING_COLUMN_DATE_FORMAT);
ERR_MULTIPLE_COLUMN_DATE_FORMAT EXCEPTION;
CODE_MULTIPLE_COLUMN_DATE_FORMAT CONSTANT PLS_INTEGER := -20005;
MSG_MULTIPLE_COLUMN_DATE_FORMAT VARCHAR2(4000) := 'Multiple records for date format in A_COLUMN_DATE_FORMAT table'
||cgBL||' There should be only one format specified for each DATE/TIMESTAMP column';
PRAGMA EXCEPTION_INIT( ERR_MULTIPLE_COLUMN_DATE_FORMAT
,CODE_MULTIPLE_COLUMN_DATE_FORMAT);
ERR_DIDNT_GET_LOAD_OPERATION_ID EXCEPTION;
CODE_DIDNT_GET_LOAD_OPERATION_ID CONSTANT PLS_INTEGER := -20006;
MSG_DIDNT_GET_LOAD_OPERATION_ID VARCHAR2(4000) := 'Did not get load operation ID from external table validation';
PRAGMA EXCEPTION_INIT( ERR_DIDNT_GET_LOAD_OPERATION_ID
,CODE_DIDNT_GET_LOAD_OPERATION_ID);
ERR_NO_CONFIG_FOR_RECEIVED_FILE EXCEPTION;
CODE_NO_CONFIG_FOR_RECEIVED_FILE CONSTANT PLS_INTEGER := -20007;
MSG_NO_CONFIG_FOR_RECEIVED_FILE VARCHAR2(4000) := 'No match for received source file in A_SOURCE_FILE_CONFIG '
||cgBL||' or missing data in A_SOURCE_FILE_RECEIVED table for provided pSourceFileReceivedKey parameter';
PRAGMA EXCEPTION_INIT( ERR_NO_CONFIG_FOR_RECEIVED_FILE
,CODE_NO_CONFIG_FOR_RECEIVED_FILE);
ERR_MULTI_CONFIG_FOR_RECEIVED_FILE EXCEPTION;
CODE_MULTI_CONFIG_FOR_RECEIVED_FILE CONSTANT PLS_INTEGER := -20008;
MSG_MULTI_CONFIG_FOR_RECEIVED_FILE VARCHAR2(4000) := 'Multiple matches for received source file in A_SOURCE_FILE_CONFIG';
PRAGMA EXCEPTION_INIT( ERR_MULTI_CONFIG_FOR_RECEIVED_FILE
,CODE_MULTI_CONFIG_FOR_RECEIVED_FILE);
ERR_FILE_NOT_FOUND_ON_CLOUD EXCEPTION;
CODE_FILE_NOT_FOUND_ON_CLOUD CONSTANT PLS_INTEGER := -20009;
MSG_FILE_NOT_FOUND_ON_CLOUD VARCHAR2(4000) := 'File not found on the cloud';
PRAGMA EXCEPTION_INIT( ERR_FILE_NOT_FOUND_ON_CLOUD
,CODE_FILE_NOT_FOUND_ON_CLOUD);
ERR_FILE_VALIDATION_FAILED EXCEPTION;
CODE_FILE_VALIDATION_FAILED CONSTANT PLS_INTEGER := -20010;
MSG_FILE_VALIDATION_FAILED VARCHAR2(4000) := 'File validation failed';
PRAGMA EXCEPTION_INIT( ERR_FILE_VALIDATION_FAILED
,CODE_FILE_VALIDATION_FAILED);
ERR_EXCESS_COLUMNS_DETECTED EXCEPTION;
CODE_EXCESS_COLUMNS_DETECTED CONSTANT PLS_INTEGER := -20011;
MSG_EXCESS_COLUMNS_DETECTED VARCHAR2(4000) := 'CSV file contains more columns than template allows';
PRAGMA EXCEPTION_INIT( ERR_EXCESS_COLUMNS_DETECTED
,CODE_EXCESS_COLUMNS_DETECTED);
ERR_NO_CONFIG_MATCH EXCEPTION;
CODE_NO_CONFIG_MATCH CONSTANT PLS_INTEGER := -20012;
MSG_NO_CONFIG_MATCH VARCHAR2(4000) := 'No match for specified parameters in A_SOURCE_FILE_CONFIG table';
PRAGMA EXCEPTION_INIT( ERR_NO_CONFIG_MATCH
,CODE_NO_CONFIG_MATCH);
ERR_UNKNOWN_PREFIX EXCEPTION;
CODE_UNKNOWN_PREFIX CONSTANT PLS_INTEGER := -20013;
MSG_UNKNOWN_PREFIX VARCHAR2(4000) := 'Unknown prefix';
PRAGMA EXCEPTION_INIT( ERR_UNKNOWN_PREFIX
,CODE_UNKNOWN_PREFIX);
ERR_TABLE_NOT_EXISTS EXCEPTION;
CODE_TABLE_NOT_EXISTS CONSTANT PLS_INTEGER := -20014;
MSG_TABLE_NOT_EXISTS VARCHAR2(4000) := 'Table does not exist';
PRAGMA EXCEPTION_INIT( ERR_TABLE_NOT_EXISTS
,CODE_TABLE_NOT_EXISTS);
ERR_COLUMN_NOT_EXISTS EXCEPTION;
CODE_COLUMN_NOT_EXISTS CONSTANT PLS_INTEGER := -20015;
MSG_COLUMN_NOT_EXISTS VARCHAR2(4000) := 'Column does not exist in table';
PRAGMA EXCEPTION_INIT( ERR_COLUMN_NOT_EXISTS
,CODE_COLUMN_NOT_EXISTS);
ERR_UNSUPPORTED_DATA_TYPE EXCEPTION;
CODE_UNSUPPORTED_DATA_TYPE CONSTANT PLS_INTEGER := -20016;
MSG_UNSUPPORTED_DATA_TYPE VARCHAR2(4000) := 'Unsupported data type';
PRAGMA EXCEPTION_INIT( ERR_UNSUPPORTED_DATA_TYPE
,CODE_UNSUPPORTED_DATA_TYPE);
ERR_MISSING_SOURCE_KEY EXCEPTION;
CODE_MISSING_SOURCE_KEY CONSTANT PLS_INTEGER := -20017;
MSG_MISSING_SOURCE_KEY VARCHAR2(4000) := 'The Source was not found in parent table A_SOURCE';
PRAGMA EXCEPTION_INIT( ERR_MISSING_SOURCE_KEY
,CODE_MISSING_SOURCE_KEY);
ERR_NULL_SOURCE_FILE_CONFIG_KEY EXCEPTION;
CODE_NULL_SOURCE_FILE_CONFIG_KEY CONSTANT PLS_INTEGER := -20018;
MSG_NULL_SOURCE_FILE_CONFIG_KEY VARCHAR2(4000) := 'No entry in A_SOURCE_FILE_CONFIG table for specified A_SOURCE_FILE_CONFIG_KEY';
PRAGMA EXCEPTION_INIT( ERR_NULL_SOURCE_FILE_CONFIG_KEY
,CODE_NULL_SOURCE_FILE_CONFIG_KEY);
ERR_DUPLICATED_SOURCE_KEY EXCEPTION;
CODE_DUPLICATED_SOURCE_KEY CONSTANT PLS_INTEGER := -20019;
MSG_DUPLICATED_SOURCE_KEY VARCHAR2(4000) := 'The Source already exists in the A_SOURCE table';
PRAGMA EXCEPTION_INIT( ERR_DUPLICATED_SOURCE_KEY
,CODE_DUPLICATED_SOURCE_KEY);
ERR_MISSING_CONTAINER_CONFIG EXCEPTION;
CODE_MISSING_CONTAINER_CONFIG CONSTANT PLS_INTEGER := -20020;
MSG_MISSING_CONTAINER_CONFIG VARCHAR2(4000) := 'No match in A_SOURCE_FILE_CONFIG table where SOURCE_FILE_TYPE=''CONTAINER'' and specified SOURCE_FILE_ID';
PRAGMA EXCEPTION_INIT( ERR_MISSING_CONTAINER_CONFIG
,CODE_MISSING_CONTAINER_CONFIG);
ERR_MULTIPLE_CONTAINER_ENTRIES EXCEPTION;
CODE_MULTIPLE_CONTAINER_ENTRIES CONSTANT PLS_INTEGER := -20021;
MSG_MULTIPLE_CONTAINER_ENTRIES VARCHAR2(4000) := 'Multiple matches in A_SOURCE_FILE_CONFIG table where SOURCE_FILE_TYPE=''CONTAINER'' and specified SOURCE_FILE_ID';
PRAGMA EXCEPTION_INIT( ERR_MULTIPLE_CONTAINER_ENTRIES
,CODE_MULTIPLE_CONTAINER_ENTRIES);
ERR_WRONG_DESTINATION_PARAM EXCEPTION;
CODE_WRONG_DESTINATION_PARAM CONSTANT PLS_INTEGER := -20022;
MSG_WRONG_DESTINATION_PARAM VARCHAR2(4000) := 'Wrong destination parameter provided.';
PRAGMA EXCEPTION_INIT( ERR_WRONG_DESTINATION_PARAM
,CODE_WRONG_DESTINATION_PARAM);
ERR_FILE_NOT_EXISTS_ON_CLOUD EXCEPTION;
CODE_FILE_NOT_EXISTS_ON_CLOUD CONSTANT PLS_INTEGER := -20023;
MSG_FILE_NOT_EXISTS_ON_CLOUD VARCHAR2(4000) := 'File does not exist on cloud.';
PRAGMA EXCEPTION_INIT( ERR_FILE_NOT_EXISTS_ON_CLOUD
,CODE_FILE_NOT_EXISTS_ON_CLOUD);
ERR_FILE_ALREADY_REGISTERED EXCEPTION;
CODE_FILE_ALREADY_REGISTERED CONSTANT PLS_INTEGER := -20024;
MSG_FILE_ALREADY_REGISTERED VARCHAR2(4000) := 'File already registered in A_SOURCE_FILE_RECEIVED table.';
PRAGMA EXCEPTION_INIT( ERR_FILE_ALREADY_REGISTERED
,CODE_FILE_ALREADY_REGISTERED);
ERR_WRONG_DATE_TIMESTAMP_FORMAT EXCEPTION;
CODE_WRONG_DATE_TIMESTAMP_FORMAT CONSTANT PLS_INTEGER := -20025;
MSG_WRONG_DATE_TIMESTAMP_FORMAT VARCHAR2(4000) := 'Provided DATE or TIMESTAMP format has errors (possible duplicated codes, ex: ''DD'').';
PRAGMA EXCEPTION_INIT( ERR_WRONG_DATE_TIMESTAMP_FORMAT
,CODE_WRONG_DATE_TIMESTAMP_FORMAT);
ERR_ENVIRONMENT_NOT_SET EXCEPTION;
CODE_ENVIRONMENT_NOT_SET CONSTANT PLS_INTEGER := -20026;
MSG_ENVIRONMENT_NOT_SET VARCHAR2(4000) := 'EnvironmentID not set'
||cgBL||' Information about environment is needed to get proper configuration values.'
||cgBL||' It can be set up in two different ways:'
||cgBL||' 1. Set it on session level: execute DBMS_SESSION.SET_IDENTIFIER (client_id => ''dev'')'
||cgBL||' 2. Set it on configuration level: Insert into CT_MRDS.A_FILE_MANAGER_CONFIG (ENVIRONMENT_ID,CONFIG_VARIABLE,CONFIG_VARIABLE_VALUE) values (''default'',''environment_id'',''dev'')'
||cgBL||' Session level setup (1.) takes precedence over configuration level one (2.)'
;
PRAGMA EXCEPTION_INIT( ERR_ENVIRONMENT_NOT_SET
,CODE_ENVIRONMENT_NOT_SET);
ERR_CONFIG_VARIABLE_NOT_SET EXCEPTION;
CODE_CONFIG_VARIABLE_NOT_SET CONSTANT PLS_INTEGER := -20027;
MSG_CONFIG_VARIABLE_NOT_SET VARCHAR2(4000) := 'Missing configuration value in A_FILE_MANAGER_CONFIG';
PRAGMA EXCEPTION_INIT( ERR_CONFIG_VARIABLE_NOT_SET
,CODE_CONFIG_VARIABLE_NOT_SET);
ERR_NOT_INPUT_SOURCE_FILE_TYPE EXCEPTION;
CODE_NOT_INPUT_SOURCE_FILE_TYPE CONSTANT PLS_INTEGER := -20028;
MSG_NOT_INPUT_SOURCE_FILE_TYPE VARCHAR2(4000) := 'Archival can be executed only for A_SOURCE_FILE_CONFIG_KEY where SOURCE_FILE_TYPE=''INPUT''';
PRAGMA EXCEPTION_INIT( ERR_NOT_INPUT_SOURCE_FILE_TYPE
,CODE_NOT_INPUT_SOURCE_FILE_TYPE);
ERR_EXP_DATA_FOR_ARCH_FAILED EXCEPTION;
CODE_EXP_DATA_FOR_ARCH_FAILED CONSTANT PLS_INTEGER := -20029;
MSG_EXP_DATA_FOR_ARCH_FAILED VARCHAR2(4000) := 'Export data for archival failed.';
PRAGMA EXCEPTION_INIT( ERR_EXP_DATA_FOR_ARCH_FAILED
,CODE_EXP_DATA_FOR_ARCH_FAILED);
ERR_RESTORE_FILE_FROM_TRASH EXCEPTION;
CODE_RESTORE_FILE_FROM_TRASH CONSTANT PLS_INTEGER := -20030;
MSG_RESTORE_FILE_FROM_TRASH VARCHAR2(4000) := 'Unexpected issues occurred during the archival process. Restoration of exported files failed.';
PRAGMA EXCEPTION_INIT( ERR_RESTORE_FILE_FROM_TRASH
,CODE_RESTORE_FILE_FROM_TRASH);
ERR_CHANGE_STAT_TO_ARCHIVED_FAILED EXCEPTION;
CODE_CHANGE_STAT_TO_ARCHIVED_FAILED CONSTANT PLS_INTEGER := -20031;
MSG_CHANGE_STAT_TO_ARCHIVED_FAILED VARCHAR2(4000) := 'Failed to change file status to: ARCHIVED in A_SOURCE_FILE_RECEIVED table.';
PRAGMA EXCEPTION_INIT( ERR_CHANGE_STAT_TO_ARCHIVED_FAILED
,CODE_CHANGE_STAT_TO_ARCHIVED_FAILED);
ERR_MOVE_FILE_TO_TRASH_FAILED EXCEPTION;
CODE_MOVE_FILE_TO_TRASH_FAILED CONSTANT PLS_INTEGER := -20032;
MSG_MOVE_FILE_TO_TRASH_FAILED VARCHAR2(4000) := 'FAILED to move file to TRASH before DROPPING it.';
PRAGMA EXCEPTION_INIT( ERR_MOVE_FILE_TO_TRASH_FAILED
,CODE_MOVE_FILE_TO_TRASH_FAILED);
ERR_DROP_EXPORTED_FILES_FAILED EXCEPTION;
CODE_DROP_EXPORTED_FILES_FAILED CONSTANT PLS_INTEGER := -20033;
MSG_DROP_EXPORTED_FILES_FAILED VARCHAR2(4000) := 'FAILED to DROP exported files from TRASH.';
PRAGMA EXCEPTION_INIT( ERR_DROP_EXPORTED_FILES_FAILED
,CODE_DROP_EXPORTED_FILES_FAILED);
ERR_INVALID_BUCKET_AREA EXCEPTION;
CODE_INVALID_BUCKET_AREA CONSTANT PLS_INTEGER := -20034;
MSG_INVALID_BUCKET_AREA VARCHAR2(4000) := 'Invalid bucket area specified. Valid values: INBOX, ODS, DATA, ARCHIVE';
PRAGMA EXCEPTION_INIT( ERR_INVALID_BUCKET_AREA
,CODE_INVALID_BUCKET_AREA);
ERR_WORKFLOW_KEY_NULL EXCEPTION;
CODE_WORKFLOW_KEY_NULL CONSTANT PLS_INTEGER := -20035;
MSG_WORKFLOW_KEY_NULL VARCHAR2(4000) := 'File validation failed: A_WORKFLOW_HISTORY_KEY column contains NULL value';
PRAGMA EXCEPTION_INIT( ERR_WORKFLOW_KEY_NULL
,CODE_WORKFLOW_KEY_NULL);
ERR_MULTIPLE_WORKFLOW_KEYS EXCEPTION;
CODE_MULTIPLE_WORKFLOW_KEYS CONSTANT PLS_INTEGER := -20036;
MSG_MULTIPLE_WORKFLOW_KEYS VARCHAR2(4000) := 'File validation failed: Multiple distinct A_WORKFLOW_HISTORY_KEY values found in file. Each file must contain exactly one workflow execution key';
PRAGMA EXCEPTION_INIT( ERR_MULTIPLE_WORKFLOW_KEYS
,CODE_MULTIPLE_WORKFLOW_KEYS);
ERR_INVALID_PARALLEL_DEGREE EXCEPTION;
CODE_INVALID_PARALLEL_DEGREE CONSTANT PLS_INTEGER := -20110;
MSG_INVALID_PARALLEL_DEGREE VARCHAR2(4000) := 'Invalid parallel degree parameter. Must be between 1 and 16';
PRAGMA EXCEPTION_INIT( ERR_INVALID_PARALLEL_DEGREE
,CODE_INVALID_PARALLEL_DEGREE);
ERR_PARALLEL_EXECUTION_FAILED EXCEPTION;
CODE_PARALLEL_EXECUTION_FAILED CONSTANT PLS_INTEGER := -20111;
MSG_PARALLEL_EXECUTION_FAILED VARCHAR2(4000) := 'Parallel execution failed';
PRAGMA EXCEPTION_INIT( ERR_PARALLEL_EXECUTION_FAILED
,CODE_PARALLEL_EXECUTION_FAILED);
ERR_UNKNOWN EXCEPTION;
CODE_UNKNOWN CONSTANT PLS_INTEGER := -20999;
MSG_UNKNOWN VARCHAR2(4000) := 'Unknown Error Occurred';
PRAGMA EXCEPTION_INIT( ERR_UNKNOWN
,CODE_UNKNOWN);
---------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------
/**
* @name LOG_PROCESS_EVENT
* @desc Insert a new log record into A_PROCESS_LOG table.
* Also outputs to console if gvConsoleLoggingEnabled = 'ON'.
* Respects logging level configuration (gvMinLogLevel).
* @example ENV_MANAGER.LOG_PROCESS_EVENT('Process completed successfully', 'INFO', 'pParam1=value1');
* @ex_rslt Record inserted into A_PROCESS_LOG table and optionally displayed in console output
**/
PROCEDURE LOG_PROCESS_EVENT (
pLogMessage VARCHAR2
,pLogLevel VARCHAR2 DEFAULT 'ERROR'
,pParameters VARCHAR2 DEFAULT NULL
,pProcessName VARCHAR2 DEFAULT 'FILE_MANAGER'
);
/**
* @name LOG_PROCESS_ERROR
* @desc Insert a detailed error record into A_PROCESS_LOG table with full stack trace, backtrace, and call stack.
* This procedure captures comprehensive error information for debugging purposes while
* allowing clean user-facing error messages to be raised separately.
* @param pLogMessage - Base error message description
* @param pParameters - Procedure parameters for context
* @param pProcessName - Name of the calling process/package
* @ex_rslt Record inserted into A_PROCESS_LOG table with complete error stack information
*/
PROCEDURE LOG_PROCESS_ERROR (
pLogMessage VARCHAR2
,pParameters VARCHAR2 DEFAULT NULL
,pProcessName VARCHAR2 DEFAULT 'FILE_MANAGER'
);
/**
* @name INIT_ERRORS
* @desc Loads data into the Errors array.
* The Errors array is a list of records (Error_Code, Error_Message) indexed by Error_Code.
* Called automatically during package initialization.
* @example Called automatically when package is first referenced
* @ex_rslt Errors array populated with all error codes and messages
**/
PROCEDURE INIT_ERRORS;
/**
* @name GET_DEFAULT_ENV
* @desc Returns the name of the default environment.
* The returned string is the A_FILE_MANAGER_CONFIG.ENVIRONMENT_ID value.
* @example select ENV_MANAGER.GET_DEFAULT_ENV() from dual;
* @ex_rslt dev
**/
FUNCTION GET_DEFAULT_ENV
RETURN VARCHAR2;
/**
* @name INIT_VARIABLES
* @desc For the specified pEnv parameter (A_FILE_MANAGER_CONFIG.ENVIRONMENT_ID),
* assigns values to the following global package variables:
* - gvNameSpace
* - gvRegion
* - gvCredentialName
* - gvInboxBucketName
* - gvDataBucketName
* - gvArchiveBucketName
* - gvInboxBucketUri
* - gvDataBucketUri
* - gvArchiveBucketUri
* - gvLoggingEnabled
* - gvMinLogLevel
* - gvDefaultDateFormat
* - gvConsoleLoggingEnabled
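* @example -- Illustrative call, following the package's EXEC example style; 'dev' is an
*          -- example ENVIRONMENT_ID (matching the GET_DEFAULT_ENV sample result):
*          EXEC ENV_MANAGER.INIT_VARIABLES(pEnv => 'dev');
* @ex_rslt Global package variables assigned for the 'dev' environment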
**/
PROCEDURE INIT_VARIABLES(
pEnv VARCHAR2
);
/**
* @name GET_ERROR_MESSAGE
* @desc Returns the error message for the specified pCode (Error_Code).
* The error message is taken from the Errors array loaded by the INIT_ERRORS procedure.
* @example select ENV_MANAGER.GET_ERROR_MESSAGE(pCode => -20009) from dual;
* @ex_rslt File not found on the cloud
**/
FUNCTION GET_ERROR_MESSAGE(
pCode PLS_INTEGER
) RETURN VARCHAR2;
/**
* @name GET_ERROR_STACK
* @desc Returns a string with all available error stack information.
* The error message is taken from the Errors array loaded by the INIT_ERRORS procedure.
* @example
* select ENV_MANAGER.GET_ERROR_STACK(
* pFormat => 'OUTPUT'
* ,pCode => -20009
* ,pSourceFileReceivedKey => NULL)
* from dual
* @ex_rslt
* ------------------------------------------------------+
* Error Message:
* ORA-0000: normal, successful completion
* -------------------------------------------------------
* Error Stack:
* -------------------------------------------------------
* Error Backtrace:
* ------------------------------------------------------+
**/
FUNCTION GET_ERROR_STACK(
pFormat VARCHAR2
,pCode PLS_INTEGER
,pSourceFileReceivedKey CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE DEFAULT NULL
) RETURN VARCHAR2;
/**
* @name FORMAT_PARAMETERS
* @desc Formats parameter list for logging purposes.
* Converts SYS.ODCIVARCHAR2LIST to formatted string with proper NULL handling.
* @example select ENV_MANAGER.FORMAT_PARAMETERS(SYS.ODCIVARCHAR2LIST('param1=value1', 'param2=NULL')) from dual;
* @ex_rslt param1=value1 ,
* param2=NULL
**/
FUNCTION FORMAT_PARAMETERS(
pParameterList SYS.ODCIVARCHAR2LIST
) RETURN VARCHAR2;
/**
* @name ANALYZE_VALIDATION_ERRORS
* @desc Analyzes CSV validation errors and generates detailed diagnostic report.
* Compares CSV structure with template table and provides specific error analysis.
* Includes suggested solutions for common validation issues.
* @param pValidationLogTable - Name of validation log table (e.g., VALIDATE$242_LOG)
* @param pTemplateSchema - Schema of template table (e.g., CT_ET_TEMPLATES)
* @param pTemplateTable - Name of template table (e.g., MOCK_PROC_TABLE)
* @param pCsvFileUri - URI of CSV file being validated
* @example SELECT ENV_MANAGER.ANALYZE_VALIDATION_ERRORS('VALIDATE$242_LOG', 'CT_ET_TEMPLATES', 'MOCK_PROC_TABLE', 'https://...') FROM DUAL;
* @ex_rslt Detailed validation analysis report with column mismatches and solutions
**/
FUNCTION ANALYZE_VALIDATION_ERRORS(
pValidationLogTable VARCHAR2,
pTemplateSchema VARCHAR2,
pTemplateTable VARCHAR2,
pCsvFileUri VARCHAR2
) RETURN VARCHAR2;
---------------------------------------------------------------------------------------------------------------------------
-- PACKAGE VERSION MANAGEMENT FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* @name GET_VERSION
* @desc Returns the current version number of the ENV_MANAGER package.
* Uses semantic versioning format (MAJOR.MINOR.PATCH).
* @example SELECT ENV_MANAGER.GET_VERSION() FROM DUAL;
* @ex_rslt 3.0.0
**/
FUNCTION GET_VERSION RETURN VARCHAR2;
/**
* @name GET_BUILD_INFO
* @desc Returns comprehensive build information including version, build date, and author.
* Formatted for display in logs or monitoring systems.
* @example SELECT ENV_MANAGER.GET_BUILD_INFO() FROM DUAL;
* @ex_rslt Package: ENV_MANAGER
* Version: 3.0.0
* Build Date: 2025-10-22 16:00:00
* Author: Grzegorz Michalski
**/
FUNCTION GET_BUILD_INFO RETURN VARCHAR2;
/**
* @name GET_VERSION_HISTORY
* @desc Returns complete version history with all releases and changes.
* Shows evolution of package features over time.
* @example SELECT ENV_MANAGER.GET_VERSION_HISTORY() FROM DUAL;
* @ex_rslt ENV_MANAGER Version History:
* 3.0.0 (2025-10-22): Added package versioning system...
* 2.1.0 (2025-10-15): Added ANALYZE_VALIDATION_ERRORS function...
**/
FUNCTION GET_VERSION_HISTORY RETURN VARCHAR2;
/**
* @name GET_PACKAGE_VERSION_INFO
* @desc Universal function to get formatted version information for any package.
* This centralized function is used by all packages in the system.
* @param pPackageName - Name of the package
* @param pVersion - Version string (MAJOR.MINOR.PATCH format)
* @param pBuildDate - Build date timestamp
* @param pAuthor - Package author name
* @example SELECT ENV_MANAGER.GET_PACKAGE_VERSION_INFO('FILE_MANAGER', '2.1.0', '2025-10-22 15:00:00', 'Grzegorz Michalski') FROM DUAL;
* @ex_rslt Package: FILE_MANAGER
* Version: 2.1.0
* Build Date: 2025-10-22 15:00:00
* Author: Grzegorz Michalski
**/
FUNCTION GET_PACKAGE_VERSION_INFO(
pPackageName VARCHAR2,
pVersion VARCHAR2,
pBuildDate VARCHAR2,
pAuthor VARCHAR2
) RETURN VARCHAR2;
/**
* @name FORMAT_VERSION_HISTORY
* @desc Universal function to format version history for any package.
* Adds package name header and proper formatting.
* @param pPackageName - Name of the package
* @param pVersionHistory - Complete version history text
* @example SELECT ENV_MANAGER.FORMAT_VERSION_HISTORY('FILE_MANAGER', '2.1.0 (2025-10-22): Export procedures...') FROM DUAL;
* @ex_rslt FILE_MANAGER Version History:
* 2.1.0 (2025-10-22): Export procedures...
**/
FUNCTION FORMAT_VERSION_HISTORY(
pPackageName VARCHAR2,
pVersionHistory VARCHAR2
) RETURN VARCHAR2;
---------------------------------------------------------------------------------------------------------------------------
-- PACKAGE HASH + CHANGE DETECTION FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* @name CALCULATE_PACKAGE_HASH
* @desc Calculates SHA256 hash of package source code from ALL_SOURCE.
* Returns hash for both SPEC and BODY (if exists).
* Used for automatic change detection.
* @param pPackageOwner - Schema owner of the package
* @param pPackageName - Name of the package
* @param pPackageType - Type of package code ('PACKAGE' for SPEC, 'PACKAGE BODY' for BODY)
* @example SELECT ENV_MANAGER.CALCULATE_PACKAGE_HASH('CT_MRDS', 'FILE_MANAGER', 'PACKAGE') FROM DUAL;
* @ex_rslt A7B3C5D9E8F1234567890ABCDEF... (64-character SHA256 hash)
**/
FUNCTION CALCULATE_PACKAGE_HASH(
pPackageOwner VARCHAR2,
pPackageName VARCHAR2,
pPackageType VARCHAR2 -- 'PACKAGE' or 'PACKAGE BODY'
) RETURN VARCHAR2;
/**
* @name TRACK_PACKAGE_VERSION
* @desc Records package version and source code hash in A_PACKAGE_VERSION_TRACKING table.
* Automatically detects if source code changed without version update.
* Should be called after every package deployment.
* @param pPackageOwner - Schema owner of the package
* @param pPackageName - Name of the package
* @param pPackageVersion - Current version from PACKAGE_VERSION constant
* @param pPackageBuildDate - Build date from PACKAGE_BUILD_DATE constant
* @param pPackageAuthor - Author from PACKAGE_AUTHOR constant
* @example EXEC ENV_MANAGER.TRACK_PACKAGE_VERSION('CT_MRDS', 'FILE_MANAGER', '3.2.0', '2025-10-22 16:30:00', 'Grzegorz Michalski');
* @ex_rslt Record inserted into A_PACKAGE_VERSION_TRACKING with change detection status
**/
PROCEDURE TRACK_PACKAGE_VERSION(
pPackageOwner VARCHAR2,
pPackageName VARCHAR2,
pPackageVersion VARCHAR2,
pPackageBuildDate VARCHAR2,
pPackageAuthor VARCHAR2
);
/**
* @name CHECK_PACKAGE_CHANGES
* @desc Checks if package source code has changed since last tracking.
* Compares current hash with last recorded hash in A_PACKAGE_VERSION_TRACKING.
* Returns detailed change detection report.
* @param pPackageOwner - Schema owner of the package
* @param pPackageName - Name of the package
* @example SELECT ENV_MANAGER.CHECK_PACKAGE_CHANGES('CT_MRDS', 'FILE_MANAGER') FROM DUAL;
* @ex_rslt WARNING: Package changed without version update!
* Last Version: 3.2.0
* Current Hash (SPEC): A7B3C5D9...
* Last Hash (SPEC): B8C4D6E0...
* RECOMMENDATION: Update PACKAGE_VERSION and PACKAGE_BUILD_DATE
**/
FUNCTION CHECK_PACKAGE_CHANGES(
pPackageOwner VARCHAR2,
pPackageName VARCHAR2
) RETURN VARCHAR2;
/**
* @name GET_PACKAGE_HASH_INFO
* @desc Returns formatted information about package hash and tracking history.
* Includes current hash, last tracked hash, and change detection status.
* @param pPackageOwner - Schema owner of the package
* @param pPackageName - Name of the package
* @example SELECT ENV_MANAGER.GET_PACKAGE_HASH_INFO('CT_MRDS', 'FILE_MANAGER') FROM DUAL;
* @ex_rslt Package: CT_MRDS.FILE_MANAGER
* Current Version: 3.2.0
* Current Hash (SPEC): A7B3C5D9...
* Last Tracked: 2025-10-22 16:30:00
* Status: OK - No changes detected
**/
FUNCTION GET_PACKAGE_HASH_INFO(
pPackageOwner VARCHAR2,
pPackageName VARCHAR2
) RETURN VARCHAR2;
END ENV_MANAGER;
/


@@ -0,0 +1,283 @@
create or replace PACKAGE CT_MRDS.FILE_ARCHIVER
AUTHID CURRENT_USER
AS
/**
* General comment for the package: please document functions and procedures as shown in the example below.
* This is the standard.
* The structure of the comment is used by the GET_PACKAGE_DOCUMENTATION function,
* which returns documentation text for the Confluence page (to copy-paste it).
**/
-- Example comment:
/**
* @name EX_PROCEDURE_NAME
* @desc Procedure description
* @example select LOGGING_AND_ERROR_MANAGER.EX_PROCEDURE_NAME(pParameter => 129) from dual;
* @ex_rslt Example Result
**/
-- Package Version Information (Semantic Versioning: MAJOR.MINOR.PATCH)
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.4.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-03-17 11:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(100) := 'Grzegorz Michalski';
-- Version History (Latest changes first)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'3.4.0 (2026-03-17): MARS-1409 - Added IS_WORKFLOW_SUCCESS_REQUIRED flag to A_SOURCE_FILE_CONFIG (DEFAULT Y). ' ||
'Y=standard DBT flow (WORKFLOW_SUCCESSFUL=Y required), N=bypass for manual/non-DBT sources. ' ||
'Flag value stored in A_TABLE_STAT and A_TABLE_STAT_HIST for full audit of statistics basis.' || CHR(13)||CHR(10) ||
'3.3.1 (2026-03-13): Fixed ORA-29913 handling in ARCHIVE_TABLE_DATA (graceful RETURN when ODS bucket is empty) and GATHER_TABLE_STAT (saves zero statistics instead of raising error)' || CHR(13)||CHR(10) ||
'3.3.0 (2026-02-11): Added IS_ARCHIVE_ENABLED and IS_KEPT_IN_TRASH columns to A_SOURCE_FILE_CONFIG for selective archiving and config-based TRASH policy. Removed pKeepInTrash parameter (now from config). Added ARCHIVE_ALL batch procedure with 3-level granularity (config/source/all). Added GATHER_TABLE_STAT_ALL batch statistics procedure with 3-level granularity. Added RESTORE_FILE_FROM_TRASH and PURGE_TRASH_FOLDER with 3-level granularity' || CHR(13)||CHR(10) ||
'3.2.1 (2026-02-10): Fixed status update - ARCHIVED → ARCHIVED_AND_TRASHED when moving files to TRASH folder (critical bug fix)' || CHR(13)||CHR(10) ||
'3.2.0 (2026-02-06): Added pKeepInTrash parameter (DEFAULT TRUE) to ARCHIVE_TABLE_DATA for TRASH folder retention control - files kept in TRASH subfolder (DATA bucket) by default for safety and compliance' || CHR(13)||CHR(10) ||
'3.1.2 (2026-02-06): Fixed missing PARTITION_YEAR/PARTITION_MONTH assignments in UPDATE statement and export query circular dependency (now filters by workflow_start instead of partition fields)' || CHR(13)||CHR(10) ||
'3.1.1 (2026-02-06): Fixed ORA-01422 error when DBMS_CLOUD.EXPORT_DATA creates multiple parquet files (parallel execution). Now stores archive directory prefix instead of individual filenames' || CHR(13)||CHR(10) ||
'3.1.0 (2026-01-29): Added function overloads for ARCHIVE_TABLE_DATA and GATHER_TABLE_STAT returning SQLCODE for Python library integration' || CHR(13)||CHR(10) ||
'3.0.0 (2026-01-27): MARS-828 - Added flexible archival strategies (MINIMUM_AGE_MONTHS with 0=current month, HYBRID) via ARCHIVAL_STRATEGY configuration' || CHR(13)||CHR(10) ||
'2.0.0 (2025-10-22): Added package versioning system using centralized ENV_MANAGER functions' || CHR(13)||CHR(10) ||
'1.5.0 (2025-10-18): Enhanced ARCHIVE_TABLE_DATA with Hive-style partitioning support' || CHR(13)||CHR(10) ||
'1.0.0 (2025-09-15): Initial release with table archival and statistics gathering';
cgBL CONSTANT VARCHAR2(2) := ENV_MANAGER.cgBL;
/**
* @name GET_TABLE_STAT
* @desc Private function to retrieve table statistics for archival processing.
* Returns A_TABLE_STAT record with table metadata and row counts.
* @param pSourceFileConfigKey - Configuration key for source file
* @return CT_MRDS.A_TABLE_STAT%ROWTYPE - Table statistics record
* @private Internal function for archival operations
**/
FUNCTION GET_TABLE_STAT(pSourceFileConfigKey IN NUMBER) RETURN CT_MRDS.A_TABLE_STAT%ROWTYPE;
/**
* @name ARCHIVE_TABLE_DATA
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data from the table specified by pSourceFileConfigKey (A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY) into a PARQUET file on OCI infrastructure.
* Each YEAR_MONTH pair goes to a separate file (implicit partitioning).
* TRASH policy is controlled by A_SOURCE_FILE_CONFIG.IS_KEPT_IN_TRASH column ('Y'=keep in TRASH, 'N'=delete immediately).
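* @example -- Illustrative key, mirroring the FN_ARCHIVE_TABLE_DATA example below (123 is not a real configuration):
*          EXEC FILE_ARCHIVER.ARCHIVE_TABLE_DATA(pSourceFileConfigKey => 123);
* @ex_rslt Table data exported to PARQUET; file TRASH handling per IS_KEPT_IN_TRASH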
**/
PROCEDURE ARCHIVE_TABLE_DATA (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE
);
/**
* @name FN_ARCHIVE_TABLE_DATA
* @desc Function wrapper for ARCHIVE_TABLE_DATA procedure.
* Returns SQLCODE for Python library integration.
* Calls the main ARCHIVE_TABLE_DATA procedure and captures execution result.
* TRASH policy is controlled by A_SOURCE_FILE_CONFIG.IS_KEPT_IN_TRASH column ('Y'=keep in TRASH, 'N'=delete immediately).
* @example SELECT FILE_ARCHIVER.FN_ARCHIVE_TABLE_DATA(pSourceFileConfigKey => 123) FROM DUAL;
* @ex_rslt 0 (success) or error code
**/
FUNCTION FN_ARCHIVE_TABLE_DATA (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE
) RETURN PLS_INTEGER;
/**
* @name GATHER_TABLE_STAT
* @desc Gathers statistics about the EXTERNAL TABLE specified by the pSourceFileConfigKey parameter (A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY).
* Data is inserted into A_TABLE_STAT and A_TABLE_STAT_HIST.
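* @example -- Illustrative key, mirroring the FN_GATHER_TABLE_STAT example below (123 is not a real configuration):
*          EXEC FILE_ARCHIVER.GATHER_TABLE_STAT(pSourceFileConfigKey => 123);
* @ex_rslt Statistics rows inserted into A_TABLE_STAT and A_TABLE_STAT_HIST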
**/
PROCEDURE GATHER_TABLE_STAT (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE
);
/**
* @name FN_GATHER_TABLE_STAT
* @desc Function wrapper for GATHER_TABLE_STAT procedure.
* Returns SQLCODE for Python library integration.
* Calls the main GATHER_TABLE_STAT procedure and captures execution result.
* @example SELECT FILE_ARCHIVER.FN_GATHER_TABLE_STAT(pSourceFileConfigKey => 123) FROM DUAL;
* @ex_rslt 0 (success) or error code
**/
FUNCTION FN_GATHER_TABLE_STAT (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE
) RETURN PLS_INTEGER;
/**
* @name GATHER_TABLE_STAT_ALL
* @desc Multi-level batch statistics gathering procedure with three granularity levels.
* Processes configurations based on IS_ARCHIVE_ENABLED setting (when pOnlyEnabled=TRUE).
* Gathers statistics for external tables and inserts data into A_TABLE_STAT and A_TABLE_STAT_HIST.
* @param pSourceFileConfigKey - (LEVEL 1) Gather stats for specific configuration key (highest priority)
* @param pSourceKey - (LEVEL 2) Gather stats for all tables in source system (e.g., 'LM', 'C2D') (medium priority)
* @param pGatherAll - (LEVEL 3) When TRUE, gather stats for ALL tables across all sources (lowest priority)
* @param pOnlyEnabled - When TRUE (default), only process tables with IS_ARCHIVE_ENABLED='Y'
* @example -- Level 1: CALL FILE_ARCHIVER.GATHER_TABLE_STAT_ALL(pSourceFileConfigKey => 123);
* @example -- Level 2: CALL FILE_ARCHIVER.GATHER_TABLE_STAT_ALL(pSourceKey => 'LM');
* @example -- Level 3: CALL FILE_ARCHIVER.GATHER_TABLE_STAT_ALL(pGatherAll => TRUE);
* @example -- All tables regardless of IS_ARCHIVE_ENABLED: CALL FILE_ARCHIVER.GATHER_TABLE_STAT_ALL(pGatherAll => TRUE, pOnlyEnabled => FALSE);
**/
PROCEDURE GATHER_TABLE_STAT_ALL (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
pSourceKey IN CT_MRDS.A_SOURCE.A_SOURCE_KEY%TYPE DEFAULT NULL,
pGatherAll IN BOOLEAN DEFAULT FALSE,
pOnlyEnabled IN BOOLEAN DEFAULT TRUE
);
/**
* @name FN_GATHER_TABLE_STAT_ALL
* @desc Function wrapper for GATHER_TABLE_STAT_ALL procedure.
* Returns SQLCODE for Python library integration.
* Calls the main GATHER_TABLE_STAT_ALL procedure and captures execution result.
* @param pSourceFileConfigKey - (LEVEL 1) Gather stats for specific configuration key (highest priority)
* @param pSourceKey - (LEVEL 2) Gather stats for all tables in source system (medium priority)
* @param pGatherAll - (LEVEL 3) When TRUE, gather stats for ALL tables across all sources (lowest priority)
* @param pOnlyEnabled - When TRUE (default), only process tables with IS_ARCHIVE_ENABLED='Y'
* @example SELECT FILE_ARCHIVER.FN_GATHER_TABLE_STAT_ALL(pSourceKey => 'LM') FROM DUAL;
* @ex_rslt 0 (success) or error code
**/
FUNCTION FN_GATHER_TABLE_STAT_ALL (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
pSourceKey IN CT_MRDS.A_SOURCE.A_SOURCE_KEY%TYPE DEFAULT NULL,
pGatherAll IN BOOLEAN DEFAULT FALSE,
pOnlyEnabled IN BOOLEAN DEFAULT TRUE
) RETURN PLS_INTEGER;
/**
* @name ARCHIVE_ALL
* @desc Multi-level batch archival procedure with three granularity levels.
* Only processes configurations where IS_ARCHIVE_ENABLED='Y'.
* TRASH policy for each table is controlled by individual IS_KEPT_IN_TRASH column.
* @param pSourceFileConfigKey - (LEVEL 1) Archive specific configuration key (highest priority)
* @param pSourceKey - (LEVEL 2) Archive all enabled tables for source system (e.g., 'LM', 'C2D') (medium priority)
* @param pArchiveAll - (LEVEL 3) When TRUE, archive ALL enabled tables across all sources (lowest priority)
* @example -- Level 1: CALL FILE_ARCHIVER.ARCHIVE_ALL(pSourceFileConfigKey => 123);
* @example -- Level 2: CALL FILE_ARCHIVER.ARCHIVE_ALL(pSourceKey => 'LM');
* @example -- Level 3: CALL FILE_ARCHIVER.ARCHIVE_ALL(pArchiveAll => TRUE);
**/
PROCEDURE ARCHIVE_ALL (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
pSourceKey IN CT_MRDS.A_SOURCE.A_SOURCE_KEY%TYPE DEFAULT NULL,
pArchiveAll IN BOOLEAN DEFAULT FALSE
);
/**
* @name FN_ARCHIVE_ALL
* @desc Function wrapper for ARCHIVE_ALL procedure.
* Returns SQLCODE for Python library integration.
* Calls the main ARCHIVE_ALL procedure and captures execution result.
* @param pSourceFileConfigKey - (LEVEL 1) Archive specific configuration key (highest priority)
* @param pSourceKey - (LEVEL 2) Archive all enabled tables for source system (medium priority)
* @param pArchiveAll - (LEVEL 3) When TRUE, archive ALL enabled tables across all sources (lowest priority)
* @example SELECT FILE_ARCHIVER.FN_ARCHIVE_ALL(pSourceKey => 'LM') FROM DUAL;
* @ex_rslt 0 (success) or error code
**/
FUNCTION FN_ARCHIVE_ALL (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
pSourceKey IN CT_MRDS.A_SOURCE.A_SOURCE_KEY%TYPE DEFAULT NULL,
pArchiveAll IN BOOLEAN DEFAULT FALSE
) RETURN PLS_INTEGER;
/**
* @name RESTORE_FILE_FROM_TRASH
* @desc Restores files from TRASH folder back to ODS at three different granularity levels.
* Moves files from TRASH subfolder back to ODS subfolder in DATA bucket.
* Updates status from ARCHIVED_AND_TRASHED to INGESTED and clears archival metadata.
* @param pSourceFileReceivedKey - (LEVEL 3) Specific file to restore by A_SOURCE_FILE_RECEIVED_KEY (highest priority)
* @param pSourceFileConfigKey - (LEVEL 2) Restore all files for specific configuration key (medium priority)
* @param pRestoreAll - (LEVEL 1) When TRUE, restore ALL files with ARCHIVED_AND_TRASHED status (lowest priority)
* @example -- Restore single file: CALL FILE_ARCHIVER.RESTORE_FILE_FROM_TRASH(pSourceFileReceivedKey => 12345);
* @example -- Restore all files for config: CALL FILE_ARCHIVER.RESTORE_FILE_FROM_TRASH(pSourceFileConfigKey => 341);
* @example -- Restore all TRASH globally: CALL FILE_ARCHIVER.RESTORE_FILE_FROM_TRASH(pRestoreAll => TRUE);
**/
PROCEDURE RESTORE_FILE_FROM_TRASH (
pSourceFileReceivedKey IN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE DEFAULT NULL,
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
pRestoreAll IN BOOLEAN DEFAULT FALSE
);
/**
* @name RESTORE_FILE_FROM_TRASH
* @desc Function overload for RESTORE_FILE_FROM_TRASH procedure.
* Returns SQLCODE for Python library integration.
* Calls the main RESTORE_FILE_FROM_TRASH procedure and captures execution result.
* @param pSourceFileReceivedKey - (LEVEL 3) Specific file to restore by A_SOURCE_FILE_RECEIVED_KEY (highest priority)
* @param pSourceFileConfigKey - (LEVEL 2) Restore all files for specific configuration key (medium priority)
* @param pRestoreAll - (LEVEL 1) When TRUE, restore ALL files with ARCHIVED_AND_TRASHED status (lowest priority)
* @example SELECT FILE_ARCHIVER.RESTORE_FILE_FROM_TRASH(pSourceFileReceivedKey => 12345) FROM DUAL;
* @ex_rslt 0 (success) or error code
**/
FUNCTION RESTORE_FILE_FROM_TRASH (
pSourceFileReceivedKey IN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE DEFAULT NULL,
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
pRestoreAll IN BOOLEAN DEFAULT FALSE
) RETURN PLS_INTEGER;
/**
* @name PURGE_TRASH_FOLDER
* @desc Deletes files from TRASH folder at three different granularity levels.
* Updates status from ARCHIVED_AND_TRASHED to ARCHIVED_AND_PURGED for all affected files.
* WARNING: This operation is irreversible - files are permanently deleted from TRASH.
* @param pSourceFileReceivedKey - (LEVEL 3) Specific file to delete by A_SOURCE_FILE_RECEIVED_KEY (highest priority)
* @param pSourceFileConfigKey - (LEVEL 2) Delete all files for specific configuration key (medium priority)
* @param pPurgeAll - (LEVEL 1) When TRUE, delete ALL files with ARCHIVED_AND_TRASHED status (lowest priority)
* @example -- Delete single file: CALL FILE_ARCHIVER.PURGE_TRASH_FOLDER(pSourceFileReceivedKey => 12345);
* @example -- Delete all files for config: CALL FILE_ARCHIVER.PURGE_TRASH_FOLDER(pSourceFileConfigKey => 341);
* @example -- Delete all TRASH globally: CALL FILE_ARCHIVER.PURGE_TRASH_FOLDER(pPurgeAll => TRUE);
**/
PROCEDURE PURGE_TRASH_FOLDER (
pSourceFileReceivedKey IN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE DEFAULT NULL,
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
pPurgeAll IN BOOLEAN DEFAULT FALSE
);
/**
* @name PURGE_TRASH_FOLDER
* @desc Function overload for PURGE_TRASH_FOLDER procedure.
* Returns SQLCODE for Python library integration.
* Calls the main PURGE_TRASH_FOLDER procedure and captures execution result.
* WARNING: This operation is irreversible - files are permanently deleted from TRASH.
* @param pSourceFileReceivedKey - (LEVEL 3) Specific file to delete by A_SOURCE_FILE_RECEIVED_KEY (highest priority)
* @param pSourceFileConfigKey - (LEVEL 2) Delete all files for specific configuration key (medium priority)
* @param pPurgeAll - (LEVEL 1) When TRUE, delete ALL files with ARCHIVED_AND_TRASHED status (lowest priority)
* @example SELECT FILE_ARCHIVER.PURGE_TRASH_FOLDER(pSourceFileReceivedKey => 12345) FROM DUAL;
* @ex_rslt 0 (success) or error code
**/
FUNCTION PURGE_TRASH_FOLDER (
pSourceFileReceivedKey IN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE DEFAULT NULL,
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
pPurgeAll IN BOOLEAN DEFAULT FALSE
) RETURN PLS_INTEGER;
---------------------------------------------------------------------------------------------------------------------------
-- PACKAGE VERSION MANAGEMENT FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* @name GET_VERSION
* @desc Returns the current version number of the FILE_ARCHIVER package.
* Uses semantic versioning format (MAJOR.MINOR.PATCH).
* @example SELECT FILE_ARCHIVER.GET_VERSION() FROM DUAL;
* @ex_rslt 2.0.0
**/
FUNCTION GET_VERSION RETURN VARCHAR2;
/**
* @name GET_BUILD_INFO
* @desc Returns comprehensive build information including version, build date, and author.
* Uses centralized ENV_MANAGER.GET_PACKAGE_VERSION_INFO function.
* @example SELECT FILE_ARCHIVER.GET_BUILD_INFO() FROM DUAL;
* @ex_rslt Package: FILE_ARCHIVER
* Version: 2.0.0
* Build Date: 2025-10-22 16:45:00
* Author: Grzegorz Michalski
**/
FUNCTION GET_BUILD_INFO RETURN VARCHAR2;
/**
* @name GET_VERSION_HISTORY
* @desc Returns complete version history with all releases and changes.
* Uses centralized ENV_MANAGER.FORMAT_VERSION_HISTORY function.
* @example SELECT FILE_ARCHIVER.GET_VERSION_HISTORY() FROM DUAL;
* @ex_rslt FILE_ARCHIVER Version History:
* 2.0.0 (2025-10-22): Added package versioning system...
**/
FUNCTION GET_VERSION_HISTORY RETURN VARCHAR2;
END;
/


@@ -0,0 +1,658 @@
create or replace PACKAGE CT_MRDS.FILE_MANAGER
AUTHID CURRENT_USER
AS
/**
* General comment for the package: please document functions and procedures as shown in the example below.
* This is the standard.
* The structure of the comment is used by the GET_PACKAGE_DOCUMENTATION function,
* which returns documentation text for the Confluence page (to copy-paste it).
**/
-- Example comment:
/**
* @name EX_PROCEDURE_NAME
* @desc Procedure description
* @example select FILE_MANAGER.EX_PROCEDURE_NAME(pParameter => 129) from dual;
* @ex_rslt Example Result
**/
-- Package Version Information (Semantic Versioning: MAJOR.MINOR.PATCH)
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.6.3';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-03-17 12:30:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(100) := 'Grzegorz Michalski';
-- Version History (Latest changes first)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'3.6.3 (2026-03-17): MARS-828 - Added pIsArchiveEnabled, pIsKeptInTrash, pArchivalStrategy, pMinimumAgeMonths to ADD_SOURCE_FILE_CONFIG; FORMAT_CONFIG now shows all A_SOURCE_FILE_CONFIG columns' || CHR(13)||CHR(10) ||
'3.6.2 (2026-03-17): MARS-1409 - Added pIsWorkflowSuccessRequired parameter to ADD_SOURCE_FILE_CONFIG; IS_WORKFLOW_SUCCESS_REQUIRED shown in GET_DET_SOURCE_FILE_CONFIG_INFO output' || CHR(13)||CHR(10) ||
'3.6.1 (2026-03-13): MARS-1468 - Fixed CHAR/NCHAR/NVARCHAR2 column definitions in GENERATE_EXTERNAL_TABLE_PARAMS: CHAR now uses char_used/char_length semantics; NCHAR/NVARCHAR2 use char_length (data_length stores bytes in AL16UTF16)' || CHR(13)||CHR(10) ||
'3.6.0 (2026-02-27): MARS-1409 - Added A_WORKFLOW_HISTORY_KEY tracking in A_SOURCE_FILE_RECEIVED. Each file now stores its workflow execution key extracted during VALIDATE_SOURCE_FILE_RECEIVED' || CHR(13)||CHR(10) ||
'3.5.1 (2026-02-24): Fixed TIMESTAMP field syntax in GENERATE_EXTERNAL_TABLE_PARAMS for SQL*Loader compatibility (CHAR(35) DATE_FORMAT TIMESTAMP MASK format)' || CHR(13)||CHR(10) ||
'3.3.2 (2026-02-20): MARS-828 - Fixed threshold column names in GET_DET_SOURCE_FILE_CONFIG_INFO for MARS-828 compatibility' || CHR(13)||CHR(10) ||
'3.3.1 (2025-11-27): MARS-1046 - Fixed ISO 8601 datetime format parsing with milliseconds and timezone (e.g., 2012-03-02T14:16:23.798+01:00)' || CHR(13)||CHR(10) ||
'3.3.0 (2025-11-26): MARS-1056 - Fixed VARCHAR2 definitions in GENERATE_EXTERNAL_TABLE_PARAMS to preserve CHAR/BYTE semantics from template tables' || CHR(13)||CHR(10) ||
'3.2.1 (2025-11-24): MARS-1049 - Added pEncoding parameter support for CSV character set specification' || CHR(13)||CHR(10) ||
'3.2.0 (2025-10-22): Added package versioning system using centralized ENV_MANAGER functions' || CHR(13)||CHR(10) ||
'3.1.0 (2025-10-20): Enhanced PROCESS_SOURCE_FILE with 6-step validation workflow' || CHR(13)||CHR(10) ||
'3.0.0 (2025-10-15): Separated export procedures into dedicated DATA_EXPORTER package' || CHR(13)||CHR(10) ||
'2.5.0 (2025-10-10): Added DELETE_SOURCE_CASCADE for safe configuration removal' || CHR(13)||CHR(10) ||
'2.0.0 (2025-09-25): Added official path patterns support (INBOX 3-level, ODS 2-level, ARCHIVE 2-level)' || CHR(13)||CHR(10) ||
'1.0.0 (2025-09-01): Initial release with file processing and validation capabilities';
TYPE tSourceFileReceived IS RECORD
(
A_SOURCE_FILE_RECEIVED_KEY CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE,
A_SOURCE_FILE_CONFIG_KEY CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_CONFIG_KEY%TYPE,
SOURCE_FILE_PREFIX_INBOX VARCHAR2(430),
SOURCE_FILE_PREFIX_ODS VARCHAR2(430),
SOURCE_FILE_PREFIX_QUARANTINE VARCHAR2(430),
SOURCE_FILE_PREFIX_ARCHIVE VARCHAR2(430),
SOURCE_FILE_NAME CT_MRDS.A_SOURCE_FILE_RECEIVED.SOURCE_FILE_NAME%TYPE,
RECEPTION_DATE CT_MRDS.A_SOURCE_FILE_RECEIVED.RECEPTION_DATE%TYPE,
PROCESSING_STATUS CT_MRDS.A_SOURCE_FILE_RECEIVED.PROCESSING_STATUS%TYPE,
EXTERNAL_TABLE_NAME CT_MRDS.A_SOURCE_FILE_RECEIVED.EXTERNAL_TABLE_NAME%TYPE
);
cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);
vgSourceFileConfigKey PLS_INTEGER;
vgMsgTmp VARCHAR2(32000);
---------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------
/**
* @name GET_SOURCE_FILE_CONFIG
* @desc Get the source file configuration by matching the source file name against configured naming patterns,
* or by specifying the key of a received source file or of a configuration record.
* @example ...
* @ex_rslt "CT_MRDS.A_SOURCE_FILE_CONFIG%ROWTYPE"
**/
FUNCTION GET_SOURCE_FILE_CONFIG(pFileUri IN VARCHAR2 DEFAULT NULL
, pSourceFileReceivedKey IN NUMBER DEFAULT NULL
, pSourceFileConfigKey IN NUMBER DEFAULT NULL)
RETURN CT_MRDS.A_SOURCE_FILE_CONFIG%ROWTYPE;
/**
* @name REGISTER_SOURCE_FILE_RECEIVED
* @desc Register a newly received source file in A_SOURCE_FILE_RECEIVED table.
* This overload automatically determines the source file type from the file name.
* It returns the value of the A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY column for the newly added record.
* @example vSourceFileReceivedKey := FILE_MANAGER.REGISTER_SOURCE_FILE_RECEIVED(pSourceFileReceivedName => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv');
* @ex_rslt 3245
**/
FUNCTION REGISTER_SOURCE_FILE_RECEIVED (
pSourceFileReceivedName IN VARCHAR2
)
RETURN PLS_INTEGER;
/**
* @name REGISTER_SOURCE_FILE_RECEIVED
* @desc Register a new source file in the A_SOURCE_FILE_RECEIVED table based on pSourceFileReceivedName and pSourceFileConfig.
* It returns the value of the A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY column for the newly added record.
* @example vSourceFileReceivedKey := FILE_MANAGER.REGISTER_SOURCE_FILE_RECEIVED(
* pSourceFileReceivedName => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv'
* ,pSourceFileConfig => ...A_SOURCE_FILE_CONFIG%ROWTYPE... );
* @ex_rslt 3245
**/
FUNCTION REGISTER_SOURCE_FILE_RECEIVED (
pSourceFileReceivedName IN VARCHAR2,
pSourceFileConfig IN CT_MRDS.A_SOURCE_FILE_CONFIG%ROWTYPE
)
RETURN PLS_INTEGER;
/**
* @name SET_SOURCE_FILE_RECEIVED_STATUS
* @desc Set the status of a file in the A_SOURCE_FILE_RECEIVED table (PROCESSING_STATUS column),
* identified by A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY,
* to the value provided in the pStatus parameter
* @example exec FILE_MANAGER.SET_SOURCE_FILE_RECEIVED_STATUS(pSourceFileReceivedKey => 377, pStatus => 'READY_FOR_INGESTION');
**/
PROCEDURE SET_SOURCE_FILE_RECEIVED_STATUS(
pSourceFileReceivedKey IN PLS_INTEGER,
pStatus IN VARCHAR2
);
/**
* @name GET_EXTERNAL_TABLE_COLUMNS
* @desc Returns a string with all table column definitions derived from the pTargetTableTemplate "TEMPLATE TABLE" name.
* It is used for creating an "EXTERNAL TABLE" via the CREATE_EXTERNAL_TABLE procedure.
* @example select FILE_MANAGER.GET_EXTERNAL_TABLE_COLUMNS(pTargetTableTemplate => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES_HEADER') from dual;
* @ex_rslt "A_KEY" NUMBER(38,0) NOT NULL ENABLE,
* "A_WORKFLOW_HISTORY_KEY" NUMBER(38,0) NOT NULL ENABLE,
* "REV_NUMBER" NUMBER(28,0),
* "REF_DATE" DATE,
* "FREE_TEXT" VARCHAR2(1000 CHAR),
* "MLF_BS_TOTAL" NUMBER(28,10),
* "DF_BS_TOTAL" NUMBER(28,10),
* "MLF_SF_TOTAL" NUMBER(28,10),
* "DF_SF_TOTAL" NUMBER(28,10)
**/
FUNCTION GET_EXTERNAL_TABLE_COLUMNS (
pTargetTableTemplate IN VARCHAR2
)
RETURN CLOB;
/**
* @name CREATE_EXTERNAL_TABLE
* @desc A wrapper procedure for DBMS_CLOUD.CREATE_EXTERNAL_TABLE which creates External Table
* MARS-1049: Added pEncoding parameter for CSV character set specification
* @param pEncoding - Character set encoding for CSV files (e.g., 'UTF8', 'WE8MSWIN1252')
* If provided, adds CHARACTERSET clause to external table definition
* @example
* begin
* FILE_MANAGER.CREATE_EXTERNAL_TABLE(
* pTableName => 'STANDING_FACILITIES_HEADER',
* pTemplateTableName => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES_HEADER',
* pPrefix => 'ODS/LM/STANDING_FACILITIES_HEADER/',
* pBucketUri => 'https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frcnomajoc7v/b/mrds_data_tst/o/',
* pFileName => NULL,
* pDelimiter => ',',
* pEncoding => 'UTF8'
* );
* end;
**/
PROCEDURE CREATE_EXTERNAL_TABLE (
pTableName IN VARCHAR2,
pTemplateTableName IN VARCHAR2,
pPrefix IN VARCHAR2,
pBucketUri IN VARCHAR2 DEFAULT ENV_MANAGER.gvInboxBucketUri,
pFileName IN VARCHAR2 DEFAULT NULL,
pDelimiter IN VARCHAR2 DEFAULT ',',
pEncoding IN VARCHAR2 DEFAULT NULL -- MARS-1049: NEW PARAMETER
);
/**
* @name CREATE_EXTERNAL_TABLE
* @desc Creates External Table for single file provided by
* pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* @example exec FILE_MANAGER.CREATE_EXTERNAL_TABLE(pSourceFileReceivedKey => 377);
**/
PROCEDURE CREATE_EXTERNAL_TABLE (
pSourceFileReceivedKey IN NUMBER
);
/**
* @name VALIDATE_SOURCE_FILE_RECEIVED
* @desc A wrapper procedure for DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE.
* It validates the External Table built upon the single file
* identified by the pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* @example exec FILE_MANAGER.VALIDATE_SOURCE_FILE_RECEIVED(pSourceFileReceivedKey => 377);
**/
PROCEDURE VALIDATE_SOURCE_FILE_RECEIVED
(
pSourceFileReceivedKey IN NUMBER
);
/**
* @name VALIDATE_EXTERNAL_TABLE
* @desc A wrapper function for DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE.
* It validates External Table provided by parameter pTableName.
* It returns: PASSED or FAILED.
* @example
* declare
* vStatus VARCHAR2(100);
* begin
* vStatus := FILE_MANAGER.VALIDATE_EXTERNAL_TABLE(pTableName => 'STANDING_FACILITIES_HEADER');
* DBMS_OUTPUT.PUT_LINE('vStatus = '||vStatus);
* end;
*
* @ex_rslt FAILED
**/
FUNCTION VALIDATE_EXTERNAL_TABLE(pTableName IN VARCHAR2)
RETURN VARCHAR2;
/**
* @name S_VALIDATE_EXTERNAL_TABLE
* @desc A function which checks whether a SELECT query returns any rows.
* It tries to select from the External Table provided by the pTableName parameter.
* It returns: PASSED or FAILED.
* @example
* declare
* vStatus VARCHAR2(100);
* begin
* vStatus := FILE_MANAGER.S_VALIDATE_EXTERNAL_TABLE(pTableName => 'STANDING_FACILITIES_HEADER');
* DBMS_OUTPUT.PUT_LINE('vStatus = '||vStatus);
* end;
*
* @ex_rslt PASSED
**/
FUNCTION S_VALIDATE_EXTERNAL_TABLE(pTableName IN VARCHAR2)
RETURN VARCHAR2;
/**
* @name DROP_EXTERNAL_TABLE
* @desc Drops the External Table for the single file identified by the
* pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* @example exec FILE_MANAGER.DROP_EXTERNAL_TABLE(pSourceFileReceivedKey => 377);
**/
PROCEDURE DROP_EXTERNAL_TABLE (
pSourceFileReceivedKey IN NUMBER
);
/**
* @name COPY_FILE
* @desc Copies the file identified by the
* pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* into the destination provided by the pDestination parameter.
* Allowed values for pDestination are: 'ODS'
* @example exec FILE_MANAGER.COPY_FILE(pSourceFileReceivedKey => 377, pDestination => 'ODS');
**/
PROCEDURE COPY_FILE(
pSourceFileReceivedKey IN NUMBER,
pDestination IN VARCHAR2
);
/**
* @name MOVE_FILE
* @desc Moves the file identified by the
* pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* into the destination provided by the pDestination parameter.
* Allowed values for pDestination are: 'ODS', 'QUARANTINE'
* @example exec FILE_MANAGER.MOVE_FILE(pSourceFileReceivedKey => 377, pDestination => 'ODS');
**/
PROCEDURE MOVE_FILE(
pSourceFileReceivedKey IN NUMBER,
pDestination IN VARCHAR2
);
/**
* @name DELETE_FOLDER_CONTENTS
* @desc Deletes all files from the specified folder in cloud storage.
* The procedure lists all objects under the specified folder prefix and deletes them one by one.
* The pBucketArea parameter specifies which bucket to use: 'INBOX', 'DATA', 'ARCHIVE'
* The pFolderPrefix parameter specifies the folder path within the bucket (e.g., 'C2D/UC_DISSEM/UC_NMA_DISSEM/')
* @example exec FILE_MANAGER.DELETE_FOLDER_CONTENTS(pBucketArea => 'INBOX', pFolderPrefix => 'C2D/UC_DISSEM/UC_NMA_DISSEM/');
**/
PROCEDURE DELETE_FOLDER_CONTENTS(
pBucketArea IN VARCHAR2,
pFolderPrefix IN VARCHAR2
);
/**
* @name PROCESS_SOURCE_FILE
* @desc Processes the file provided by the pSourceFileReceivedName parameter.
* Umbrella procedure that calls:
* - REGISTER_SOURCE_FILE_RECEIVED;
* - CREATE_EXTERNAL_TABLE;
* - VALIDATE_SOURCE_FILE_RECEIVED;
* - DROP_EXTERNAL_TABLE;
* - MOVE_FILE;
* @example exec FILE_MANAGER.PROCESS_SOURCE_FILE(pSourceFileReceivedName => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv');
**/
PROCEDURE PROCESS_SOURCE_FILE(pSourceFileReceivedName IN VARCHAR2)
;
/**
* @name PROCESS_SOURCE_FILE
* @desc Processes the file provided by the pSourceFileReceivedName parameter and returns the processing result.
* It returns 0 on success, or a negative error code on failure.
* Umbrella function that calls the PROCESS_SOURCE_FILE procedure.
* @example
* declare
* vResult PLS_INTEGER;
* begin
* vResult := CT_MRDS.FILE_MANAGER.PROCESS_SOURCE_FILE(PSOURCEFILERECEIVEDNAME => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv');
* DBMS_OUTPUT.PUT_LINE('vResult = ' || vResult);
* end;
* @ex_rslt 0
* -20021
**/
FUNCTION PROCESS_SOURCE_FILE(pSourceFileReceivedName IN VARCHAR2)
RETURN PLS_INTEGER;
/**
* @name GET_DATE_FORMAT
* @desc Returns the date format for the specified template table name and column name.
* The format is taken from the A_COLUMN_DATE_FORMAT configuration table.
* @example select FILE_MANAGER.GET_DATE_FORMAT(
* pTemplateTableName => 'STANDING_FACILITIES_HEADER',
* pColumnName => 'SNAPSHOT_DATE')
* from dual;
* @ex_rslt DD/MM/YYYY HH24:MI:SS
**/
FUNCTION GET_DATE_FORMAT(
pTemplateTableName IN VARCHAR2,
pColumnName IN VARCHAR2
) RETURN VARCHAR2;
/**
* @name GENERATE_EXTERNAL_TABLE_PARAMS
* @desc Builds two strings, pColumnList and pFieldList, for the template table specified by the pTemplateTableName parameter.
* @example
* declare
* vColumnList CLOB;
* vFieldList CLOB;
* begin
* FILE_MANAGER.GENERATE_EXTERNAL_TABLE_PARAMS (
* pTemplateTableName => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES_HEADER'
* ,pColumnList => vColumnList
* ,pFieldList => vFieldList
* );
* DBMS_OUTPUT.PUT_LINE('vColumnList = '||vColumnList);
* DBMS_OUTPUT.PUT_LINE('vFieldList = '||vFieldList);
* end;
* /
**/
PROCEDURE GENERATE_EXTERNAL_TABLE_PARAMS (
pTemplateTableName IN VARCHAR2,
pColumnList OUT CLOB,
pFieldList OUT CLOB
);
/**
* @name ADD_SOURCE
* @desc Inserts a new record into the A_SOURCE table.
* pSourceKey is the PRIMARY KEY value.
* @example exec FILE_MANAGER.ADD_SOURCE(pSourceKey => 'TEST_SYS', pSourceName => 'Test source system'); -- illustrative values
**/
PROCEDURE ADD_SOURCE (
pSourceKey IN CT_MRDS.A_SOURCE.A_SOURCE_KEY%TYPE,
pSourceName IN CT_MRDS.A_SOURCE.SOURCE_NAME%TYPE
);
/**
* @name DELETE_SOURCE_CASCADE
* @desc Safely deletes a SOURCE specified by pSourceKey parameter from A_SOURCE table and all dependent tables:
* - A_SOURCE_FILE_CONFIG
* - A_SOURCE_FILE_RECEIVED
* - A_COLUMN_DATE_FORMAT (only if template table is not shared with other source systems)
* The procedure checks if template tables are shared before deleting date format configurations.
* If a template table is used by multiple source systems, date formats are preserved.
* @example CALL CT_MRDS.FILE_MANAGER.DELETE_SOURCE_CASCADE(pSourceKey => 'TEST_SYS');
**/
PROCEDURE DELETE_SOURCE_CASCADE (
pSourceKey IN CT_MRDS.A_SOURCE.A_SOURCE_KEY%TYPE
);
/**
* @name GET_CONTAINER_SOURCE_FILE_CONFIG_KEY
* @desc For specified parameter pSourceFileId (A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID)
* it returns A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY for related CONTAINER record.
* @example select FILE_MANAGER.GET_CONTAINER_SOURCE_FILE_CONFIG_KEY(
* pSourceFileId => 'UC_DISSEM')
* from dual;
* @ex_rslt 126
**/
FUNCTION GET_CONTAINER_SOURCE_FILE_CONFIG_KEY (
pSourceFileId IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID%TYPE
) RETURN PLS_INTEGER;
/**
* @name GET_SOURCE_FILE_CONFIG_KEY
* @desc For specified input parameters,
* it returns A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY.
* @example select FILE_MANAGER.GET_SOURCE_FILE_CONFIG_KEY (
* pSourceFileType => 'INPUT'
* ,pSourceFileId => 'UC_DISSEM'
* ,pTableId => 'UC_NMA_DISSEM')
* from dual;
* @ex_rslt 126
**/
FUNCTION GET_SOURCE_FILE_CONFIG_KEY (
pSourceFileType IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE%TYPE DEFAULT 'INPUT'
,pSourceFileId IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID%TYPE
,pTableId IN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID%TYPE
) RETURN PLS_INTEGER;
/**
* @name ADD_SOURCE_FILE_CONFIG
* @desc Insert a new record to A_SOURCE_FILE_CONFIG table.
* MARS-1049: Added pEncoding parameter for CSV character set specification.
* MARS-1409: Added pIsWorkflowSuccessRequired parameter.
* MARS-828: Added pIsArchiveEnabled, pIsKeptInTrash, pArchivalStrategy, pMinimumAgeMonths.
* @param pEncoding - Character set encoding for CSV files (e.g., 'UTF8', 'WE8MSWIN1252', 'EE8ISO8859P2')
* If NULL, no CHARACTERSET clause is added to external table definitions
* @param pIsWorkflowSuccessRequired - 'Y' (default) = archivization requires WORKFLOW_SUCCESSFUL='Y' (standard DBT flow)
* 'N' = archive regardless of workflow status (bypass for manual/non-DBT sources)
* @param pIsArchiveEnabled - 'Y' = enable automatic archivization for this config; 'N' (default) = disabled
* @param pIsKeptInTrash - 'Y' (default) = move files to trash before purge; 'N' = purge directly
* @param pArchivalStrategy - Archival strategy: 'THRESHOLD_BASED' (default), 'MINIMUM_AGE_MONTHS', or 'HYBRID'
* @param pMinimumAgeMonths - Minimum age in months before file eligible for archivization (used with MINIMUM_AGE_MONTHS strategy)
* @example CALL CT_MRDS.FILE_MANAGER.ADD_SOURCE_FILE_CONFIG(
* pSourceKey => 'C2D', pSourceFileType => 'INPUT',
* pSourceFileId => 'UC_DISSEM', pTableId => 'METADATA_LOADS',
* pTemplateTableName => 'CT_ET_TEMPLATES.C2D_A_UC_DISSEM_METADATA_LOADS',
* pEncoding => 'UTF8', pIsWorkflowSuccessRequired => 'Y',
* pIsArchiveEnabled => 'Y', pIsKeptInTrash => 'N',
* pArchivalStrategy => 'MINIMUM_AGE_MONTHS', pMinimumAgeMonths => 3
* );
**/
PROCEDURE ADD_SOURCE_FILE_CONFIG (
pSourceKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_KEY%TYPE
,pSourceFileType IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE%TYPE
,pSourceFileId IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID%TYPE
,pSourceFileDesc IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_DESC%TYPE
,pSourceFileNamePattern IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_NAME_PATTERN%TYPE
,pTableId IN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID%TYPE DEFAULT NULL
,pTemplateTableName IN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME%TYPE DEFAULT NULL
,pContainerFileKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY%TYPE DEFAULT NULL
,pEncoding IN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING%TYPE DEFAULT NULL -- MARS-1049
,pIsWorkflowSuccessRequired IN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_WORKFLOW_SUCCESS_REQUIRED%TYPE DEFAULT 'Y' -- MARS-1409
,pIsArchiveEnabled IN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED%TYPE DEFAULT 'N' -- MARS-828
,pIsKeptInTrash IN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_KEPT_IN_TRASH%TYPE DEFAULT 'Y' -- MARS-828
,pArchivalStrategy IN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY%TYPE DEFAULT 'THRESHOLD_BASED' -- MARS-828
,pMinimumAgeMonths IN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS%TYPE DEFAULT 0 -- MARS-828
);
/**
* @name ADD_COLUMN_DATE_FORMAT
* @desc Inserts a new record into the A_COLUMN_DATE_FORMAT table.
* @example exec FILE_MANAGER.ADD_COLUMN_DATE_FORMAT( -- illustrative values
*              pTemplateTableName => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES_HEADER'
*              ,pColumnName => 'SNAPSHOT_DATE'
*              ,pDateFormat => 'DD/MM/YYYY HH24:MI:SS');
**/
PROCEDURE ADD_COLUMN_DATE_FORMAT (
pTemplateTableName IN CT_MRDS.A_COLUMN_DATE_FORMAT.TEMPLATE_TABLE_NAME%TYPE
,pColumnName IN CT_MRDS.A_COLUMN_DATE_FORMAT.COLUMN_NAME%TYPE
,pDateFormat IN CT_MRDS.A_COLUMN_DATE_FORMAT.DATE_FORMAT%TYPE
);
/**
* @name GET_BUCKET_URI
* @desc Returns the bucket HTTP URL as a string.
* Possible input values for pBucketArea are: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example select FILE_MANAGER.GET_BUCKET_URI(pBucketArea => 'ODS') from dual;
* @ex_rslt https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frcnomajoc7v/b/mrds_data_tst/o/
**/
FUNCTION GET_BUCKET_URI(pBucketArea VARCHAR2)
RETURN VARCHAR2;
/**
* @name GET_DET_SOURCE_FILE_CONFIG_INFO
* @desc Function returns details about A_SOURCE_FILE_CONFIG record
* for specified pSourceFileConfigKey (A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY).
* If pIncludeContainerInfo is <> 0 it returns additional info about related Container config record (A_SOURCE_FILE_CONFIG)
* If pIncludeColumnFormatInfo is <> 0 it returns additional info about related ColumnFormat config record (A_COLUMN_DATE_FORMAT)
* @example select FILE_MANAGER.GET_DET_SOURCE_FILE_CONFIG_INFO (
* pSourceFileConfigKey => 128
* ,pIncludeContainerInfo => 1
* ,pIncludeColumnFormatInfo => 1
* ) from dual;
* @ex_rslt
* Details about File Configuration:
* --------------------------------
* A_SOURCE_FILE_CONFIG_KEY = 128
* A_SOURCE_KEY = C2D
* ...
* --------------------------------
*
* Details about related Container Config:
* --------------------------------
* A_SOURCE_FILE_CONFIG_KEY = 126
* A_SOURCE_KEY = C2D
* ...
* --------------------------------
*
* Column Date Format config entries:
* --------------------------------
* TEMPLATE_TABLE_NAME = CT_ET_TEMPLATES.C2D_UC_MA_DISSEM
* ...
* --------------------------------
**/
FUNCTION GET_DET_SOURCE_FILE_CONFIG_INFO (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE
,pIncludeContainerInfo IN PLS_INTEGER DEFAULT 1
,pIncludeColumnFormatInfo IN PLS_INTEGER DEFAULT 1
) RETURN VARCHAR2;
/**
* @name GET_DET_SOURCE_FILE_RECEIVED_INFO
* @desc Function returns details about A_SOURCE_FILE_RECEIVED record
* for specified pSourceFileReceivedKey (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY).
* If pIncludeConfigInfo is <> 0 it returns additional info about the related Config record (A_SOURCE_FILE_CONFIG)
* If pIncludeContainerInfo is <> 0 it returns additional info about related Container config record (A_SOURCE_FILE_CONFIG)
* If pIncludeColumnFormatInfo is <> 0 it returns additional info about related ColumnFormat config record (A_COLUMN_DATE_FORMAT)
* @example select FILE_MANAGER.GET_DET_SOURCE_FILE_RECEIVED_INFO (
* pSourceFileReceivedKey => 377
* ,pIncludeConfigInfo => 1
* ,pIncludeContainerInfo => 1
* ,pIncludeColumnFormatInfo => 1
* ) from dual;
*
**/
FUNCTION GET_DET_SOURCE_FILE_RECEIVED_INFO (
pSourceFileReceivedKey IN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE
,pIncludeConfigInfo IN PLS_INTEGER DEFAULT 1
,pIncludeContainerInfo IN PLS_INTEGER DEFAULT 1
,pIncludeColumnFormatInfo IN PLS_INTEGER DEFAULT 1
) RETURN VARCHAR2;
/**
* @name GET_DET_USER_LOAD_OPERATIONS
* @desc Function returns details from USER_LOAD_OPERATIONS table
* for specified pOperationId.
* @example select FILE_MANAGER.GET_DET_USER_LOAD_OPERATIONS (pOperationId => 3608) from dual;
* @ex_rslt
* Details about USER_LOAD_OPERATIONS where ID = 3608
* --------------------------------
* ID = 3608
* TYPE = VALIDATE
* SID = 31260
* SERIAL# = 52915
* START_TIME = 2025-05-20 10.08.24.436983 EUROPE/BELGRADE
* UPDATE_TIME = 2025-05-20 10.08.24.458643 EUROPE/BELGRADE
* STATUS = FAILED
* OWNER_NAME = CT_MRDS
* TABLE_NAME = STANDING_FACILITIES_HEADER
* PARTITION_NAME =
* SUBPARTITION_NAME =
* FILE_URI_LIST =
* ROWS_LOADED =
* LOGFILE_TABLE = VALIDATE$3608_LOG
* BADFILE_TABLE = VALIDATE$3608_BAD
* STATUS_TABLE =
* TEMPEXT_TABLE =
* CREDENTIAL_NAME =
* EXPIRATION_TIME = 2025-05-22 10.08.24.436983000 EUROPE/BELGRADE
* --------------------------------
**/
FUNCTION GET_DET_USER_LOAD_OPERATIONS (
pOperationId PLS_INTEGER
) RETURN VARCHAR2;
/**
* @name ANALYZE_VALIDATION_ERRORS
* @desc Wrapper function that analyzes validation errors for a source file using its received key.
* Automatically derives template schema, table name, CSV URI and validation log table
* from file metadata and calls ENV_MANAGER.ANALYZE_VALIDATION_ERRORS.
* @example SELECT FILE_MANAGER.ANALYZE_VALIDATION_ERRORS(63) FROM DUAL;
* @ex_rslt Detailed validation analysis report with column mismatches and solutions
**/
FUNCTION ANALYZE_VALIDATION_ERRORS(
pSourceFileReceivedKey IN NUMBER
) RETURN VARCHAR2;
---------------------------------------------------------------------------------------------------------------------------
-- PACKAGE VERSION MANAGEMENT FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* @name GET_VERSION
* @desc Returns the current version number of the FILE_MANAGER package.
* Uses semantic versioning format (MAJOR.MINOR.PATCH).
* @example SELECT FILE_MANAGER.GET_VERSION() FROM DUAL;
* @ex_rslt 3.2.0
**/
FUNCTION GET_VERSION RETURN VARCHAR2;
/**
* @name GET_BUILD_INFO
* @desc Returns comprehensive build information including version, build date, and author.
* Uses centralized ENV_MANAGER.GET_PACKAGE_VERSION_INFO function.
* @example SELECT FILE_MANAGER.GET_BUILD_INFO() FROM DUAL;
* @ex_rslt Package: FILE_MANAGER
* Version: 3.2.0
* Build Date: 2025-10-22 16:30:00
* Author: Grzegorz Michalski
**/
FUNCTION GET_BUILD_INFO RETURN VARCHAR2;
/**
* @name GET_VERSION_HISTORY
* @desc Returns complete version history with all releases and changes.
* Uses centralized ENV_MANAGER.FORMAT_VERSION_HISTORY function.
* @example SELECT FILE_MANAGER.GET_VERSION_HISTORY() FROM DUAL;
* @ex_rslt FILE_MANAGER Version History:
* 3.2.0 (2025-10-22): Added package versioning system...
**/
FUNCTION GET_VERSION_HISTORY RETURN VARCHAR2;
END;
/
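A minimal end-to-end sketch of the manual flow that PROCESS_SOURCE_FILE wraps, using the procedures declared in the spec above (illustrative only; the file path is the example value used throughout this spec):

```sql
-- Illustrative sketch: the manual equivalent of FILE_MANAGER.PROCESS_SOURCE_FILE
DECLARE
  vKey PLS_INTEGER;
BEGIN
  -- 1. Register the received file and resolve its configuration from naming patterns
  vKey := CT_MRDS.FILE_MANAGER.REGISTER_SOURCE_FILE_RECEIVED(
            pSourceFileReceivedName => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv');
  -- 2. Build an external table over the single received file
  CT_MRDS.FILE_MANAGER.CREATE_EXTERNAL_TABLE(pSourceFileReceivedKey => vKey);
  -- 3. Validate the file through its external table
  CT_MRDS.FILE_MANAGER.VALIDATE_SOURCE_FILE_RECEIVED(pSourceFileReceivedKey => vKey);
  -- 4. Drop the temporary external table and move the validated file to ODS
  CT_MRDS.FILE_MANAGER.DROP_EXTERNAL_TABLE(pSourceFileReceivedKey => vKey);
  CT_MRDS.FILE_MANAGER.MOVE_FILE(pSourceFileReceivedKey => vKey, pDestination => 'ODS');
END;
/
```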

View File

@@ -0,0 +1,141 @@
-- ============================================================================
-- MARS-1409 Master Rollback Script
-- ============================================================================
-- Author: Grzegorz Michalski
-- Purpose: Rollback A_WORKFLOW_HISTORY_KEY column changes from A_SOURCE_FILE_RECEIVED
-- Target Schema: CT_MRDS
-- Estimated Time: 1-2 minutes
-- Prerequisites: Backup of current FILE_MANAGER package, ADMIN privileges
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
SET VERIFY OFF
SET FEEDBACK ON
SET ECHO OFF
-- Create log directory if it doesn't exist
host mkdir log 2>nul
-- Generate dynamic SPOOL filename with timestamp
var filename VARCHAR2(100)
BEGIN
:filename := 'log/ROLLBACK_MARS_1409_' || SYS_CONTEXT('USERENV', 'CON_NAME') || '_' || TO_CHAR(SYSDATE,'YYYYMMDD_HH24MISS') || '.log';
END;
/
column filename new_value _filename
select :filename filename from dual;
spool &_filename
PROMPT
PROMPT ============================================================================
PROMPT MARS-1409 Rollback Starting
PROMPT ============================================================================
PROMPT Package: CT_MRDS.FILE_MANAGER
PROMPT Change: Remove A_WORKFLOW_HISTORY_KEY column and restore previous version
PROMPT Steps: 13 (Drop tables/columns first, then Restore ENV_MANAGER, FILE_MANAGER, DATA_EXPORTER, FILE_ARCHIVER (dependency order), Restore trigger, Verify)
PROMPT Timestamp:
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS rollback_start FROM DUAL;
PROMPT ============================================================================
-- Confirm rollback with user
ACCEPT continue CHAR PROMPT 'Type YES to continue with rollback, or Ctrl+C to abort: '
WHENEVER SQLERROR EXIT SQL.SQLCODE
BEGIN
IF '&continue' IS NULL OR TRIM('&continue') IS NULL OR UPPER(TRIM('&continue')) != 'YES' THEN
RAISE_APPLICATION_ERROR(-20000, 'Rollback aborted by user');
END IF;
END;
/
PROMPT
PROMPT ============================================================================
PROMPT STEP 1: Drop A_TABLE_STAT, A_TABLE_STAT_HIST and IS_WORKFLOW_SUCCESS_REQUIRED column
PROMPT (must be done BEFORE compiling rollback packages so column names match)
PROMPT ============================================================================
@@98_MARS_1409_rollback_archival_strategy_column.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 2: Drop A_WORKFLOW_HISTORY_KEY column from A_SOURCE_FILE_RECEIVED
PROMPT ============================================================================
@@99_MARS_1409_rollback_workflow_history_key_column.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 3: Restore ENV_MANAGER package specification (previous version)
PROMPT ============================================================================
@@95_MARS_1409_rollback_CT_MRDS_ENV_MANAGER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 4: Restore ENV_MANAGER package body (previous version)
PROMPT ============================================================================
@@96_MARS_1409_rollback_CT_MRDS_ENV_MANAGER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 5: Restore FILE_MANAGER package specification (previous version)
PROMPT ============================================================================
@@93_MARS_1409_rollback_CT_MRDS_FILE_MANAGER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 6: Restore FILE_MANAGER package body (previous version)
PROMPT ============================================================================
@@94_MARS_1409_rollback_CT_MRDS_FILE_MANAGER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 7: Restore DATA_EXPORTER package specification (previous version)
PROMPT ============================================================================
@@83_MARS_1409_rollback_CT_MRDS_DATA_EXPORTER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 8: Restore DATA_EXPORTER package body (previous version)
PROMPT ============================================================================
@@84_MARS_1409_rollback_CT_MRDS_DATA_EXPORTER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 9: Restore FILE_ARCHIVER package specification (previous version)
PROMPT ============================================================================
@@91_MARS_1409_rollback_CT_MRDS_FILE_ARCHIVER_SPEC.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 10: Restore FILE_ARCHIVER package body (previous version)
PROMPT ============================================================================
@@92_MARS_1409_rollback_CT_MRDS_FILE_ARCHIVER_BODY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 11: Restore A_WORKFLOW_HISTORY trigger (previous version)
PROMPT ============================================================================
@@97_MARS_1409_rollback_CT_MRDS_A_WORKFLOW_HISTORY.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 12: Verify rollback
PROMPT ============================================================================
@@90_MARS_1409_verify_rollback.sql
PROMPT
PROMPT ============================================================================
PROMPT STEP 13: Verify package versions
PROMPT ============================================================================
@@verify_packages_version.sql
PROMPT
PROMPT ============================================================================
PROMPT MARS-1409 Rollback Complete
PROMPT ============================================================================
PROMPT Final Status:
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS rollback_end FROM DUAL;
PROMPT
PROMPT Review the log file for detailed results: &_filename
PROMPT ============================================================================
spool off
quit;

View File

@@ -0,0 +1,101 @@
-- ====================================================================
-- A_SOURCE_FILE_CONFIG Table (rollback_version - pre MARS-1409)
-- ====================================================================
-- Purpose: Store source file configuration and processing rules
-- MARS-1049: Added ENCODING column for CSV character set support
-- MARS-828: Added ARCHIVAL_STRATEGY and MINIMUM_AGE_MONTHS for archival automation
-- NOTE: IS_WORKFLOW_SUCCESS_REQUIRED column NOT included (added by MARS-1409)
-- ====================================================================
CREATE TABLE CT_MRDS.A_SOURCE_FILE_CONFIG (
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL ENABLE,
A_SOURCE_KEY VARCHAR2(30) NOT NULL ENABLE,
SOURCE_FILE_TYPE VARCHAR2(200), -- Can be 'INPUT' or 'CONTAINER' or 'LOAD_CONFIG'
SOURCE_FILE_ID VARCHAR2(200),
SOURCE_FILE_DESC VARCHAR2(2000),
SOURCE_FILE_NAME_PATTERN VARCHAR2(200),
TABLE_ID VARCHAR2(200),
TEMPLATE_TABLE_NAME VARCHAR2(200),
CONTAINER_FILE_KEY NUMBER(38,0),
ARCHIVE_THRESHOLD_DAYS NUMBER(4,0),
ARCHIVE_THRESHOLD_FILES_COUNT NUMBER(38,0),
ARCHIVE_THRESHOLD_BYTES_SUM NUMBER(38,0),
ODS_SCHEMA_NAME VARCHAR2(100),
ARCHIVE_THRESHOLD_ROWS_COUNT NUMBER(38,0),
HOURS_TO_EXPIRE_STATISTICS NUMBER(38,3),
ARCHIVAL_STRATEGY VARCHAR2(50),
MINIMUM_AGE_MONTHS NUMBER(3,0),
ENCODING VARCHAR2(50) DEFAULT 'UTF8',
IS_ARCHIVE_ENABLED CHAR(1) DEFAULT 'N' NOT NULL,
IS_KEEP_IN_TRASH CHAR(1) DEFAULT 'N' NOT NULL,
CONSTRAINT A_SOURCE_FILE_CONFIG_PK PRIMARY KEY (A_SOURCE_FILE_CONFIG_KEY),
CONSTRAINT CHK_IS_ARCHIVE_ENABLED CHECK (IS_ARCHIVE_ENABLED IN ('Y', 'N')),
CONSTRAINT CHK_IS_KEEP_IN_TRASH CHECK (IS_KEEP_IN_TRASH IN ('Y', 'N')),
CONSTRAINT SOURCE_FILE_TYPE_CHK CHECK (SOURCE_FILE_TYPE IN ('INPUT', 'CONTAINER', 'LOAD_CONFIG')),
CONSTRAINT ASFC_A_SOURCE_KEY_FK FOREIGN KEY(A_SOURCE_KEY) REFERENCES CT_MRDS.A_SOURCE(A_SOURCE_KEY),
CONSTRAINT ASFC_CONTAINER_FILE_KEY_FK FOREIGN KEY(CONTAINER_FILE_KEY) REFERENCES CT_MRDS.A_SOURCE_FILE_CONFIG(A_SOURCE_FILE_CONFIG_KEY),
CONSTRAINT A_SOURCE_FILE_CONFIG_UQ1 UNIQUE(SOURCE_FILE_TYPE, SOURCE_FILE_ID, TABLE_ID)
) TABLESPACE "DATA";
-- Column comments
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY IS
'Primary key - unique identifier for source file configuration record';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_KEY IS
'Foreign key to A_SOURCE table - identifies the source system (e.g., LM, C2D, CSDB)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE IS
'Type of file configuration: INPUT (data files), CONTAINER (xml files), or LOAD_CONFIG (configuration files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID IS
'Unique identifier for the source file within the source system (e.g., UC_DISSEM, STANDING_FACILITIES)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_DESC IS
'Human-readable description of the source file and its purpose';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_NAME_PATTERN IS
'Filename pattern for matching incoming files (supports wildcards, e.g., UC_NMA_DISSEM-*.csv)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID IS
'Identifier for the target table where data will be loaded (without schema prefix)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME IS
'Fully qualified name of template table in CT_ET_TEMPLATES schema used for external table creation';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY IS
'Foreign key to parent container configuration when this file is part of an XML container (NULL for standalone files)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_DAYS IS
'Threshold for THRESHOLD_BASED strategy: archive data older than N days';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_FILES_COUNT IS
'Trigger archival when file count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_BYTES_SUM IS
'Trigger archival when total size in bytes exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVE_THRESHOLD_ROWS_COUNT IS
'Trigger archival when total row count exceeds this threshold (used in THRESHOLD_BASED and HYBRID strategies)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ODS_SCHEMA_NAME IS
'Schema name where ODS external tables are created (typically ODS)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.HOURS_TO_EXPIRE_STATISTICS IS
'Number of hours before table statistics expire and need to be recalculated';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ARCHIVAL_STRATEGY IS
'Archival strategy: THRESHOLD_BASED (days-based), MINIMUM_AGE_MONTHS (0=current month, N=retain N months), HYBRID (combination)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.MINIMUM_AGE_MONTHS IS
'Minimum age in months before archival (required for MINIMUM_AGE_MONTHS and HYBRID strategies, 0=current month only)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING IS
'Oracle character set name for CSV files (e.g., UTF8, WE8MSWIN1252, EE8ISO8859P2)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_ARCHIVE_ENABLED IS
'Y=Enable archiving, N=Skip archiving. Controls if table participates in archival process';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.IS_KEEP_IN_TRASH IS
'Y=Keep files in TRASH after archiving, N=Delete immediately. Controls TRASH retention policy';
GRANT SELECT, INSERT, UPDATE, DELETE ON CT_MRDS.A_SOURCE_FILE_CONFIG TO MRDS_LOADER_ROLE;


@@ -0,0 +1,70 @@
-- ====================================================================
-- A_SOURCE_FILE_RECEIVED Table
-- ====================================================================
-- Purpose: Track received files and their processing status
-- ====================================================================
CREATE TABLE CT_MRDS.A_SOURCE_FILE_RECEIVED (
A_SOURCE_FILE_RECEIVED_KEY NUMBER(38,0) NOT NULL ENABLE,
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL ENABLE,
SOURCE_FILE_NAME VARCHAR2(1000) NOT NULL,
CHECKSUM VARCHAR2(128),
CREATED TIMESTAMP(6) WITH TIME ZONE,
BYTES NUMBER,
RECEPTION_DATE DATE NOT NULL,
PROCESSING_STATUS VARCHAR2(200),
EXTERNAL_TABLE_NAME VARCHAR2(200),
PARTITION_YEAR VARCHAR2(4),
PARTITION_MONTH VARCHAR2(2),
ARCH_PATH VARCHAR2(1000),
PROCESS_NAME VARCHAR2(200),
CONSTRAINT A_SOURCE_FILE_RECEIVED_PK PRIMARY KEY (A_SOURCE_FILE_RECEIVED_KEY),
CONSTRAINT ASFR_A_SOURCE_FILE_CONFIG_KEY_FK FOREIGN KEY(A_SOURCE_FILE_CONFIG_KEY) REFERENCES CT_MRDS.A_SOURCE_FILE_CONFIG(A_SOURCE_FILE_CONFIG_KEY),
CONSTRAINT A_SOURCE_FILE_RECEIVED_CHK CHECK (PROCESSING_STATUS IN ('RECEIVED', 'VALIDATION_FAILED', 'VALIDATED', 'READY_FOR_INGESTION', 'INGESTED', 'ARCHIVED', 'ARCHIVED_AND_TRASHED', 'ARCHIVED_AND_PURGED'))
) TABLESPACE "DATA";
-- Unique index for file identification (workaround for TIMESTAMP WITH TIME ZONE constraint limitation)
CREATE UNIQUE INDEX CT_MRDS.A_SOURCE_FILE_RECEIVED_UK1
ON CT_MRDS.A_SOURCE_FILE_RECEIVED(CHECKSUM, CREATED, BYTES);
-- Column comments
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY IS
'Primary key - unique identifier for received file record';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_CONFIG_KEY IS
'Foreign key to A_SOURCE_FILE_CONFIG - links file to its configuration and processing rules';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.SOURCE_FILE_NAME IS
'Full object name/path of the received file in OCI Object Storage (includes INBOX prefix)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.CHECKSUM IS
'MD5 checksum of file content for integrity verification and duplicate detection';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.CREATED IS
'Timestamp with timezone when file was created/uploaded to Object Storage (from DBMS_CLOUD.LIST_OBJECTS)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.BYTES IS
'File size in bytes';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.RECEPTION_DATE IS
'Date when file was registered in the system (extracted from CREATED timestamp)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PROCESSING_STATUS IS
'Current processing status: RECEIVED → VALIDATED (or VALIDATION_FAILED if errors) → READY_FOR_INGESTION → INGESTED → ARCHIVED → ARCHIVED_AND_TRASHED → ARCHIVED_AND_PURGED';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.EXTERNAL_TABLE_NAME IS
'Name of temporary external table created for file validation (dropped after validation)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PARTITION_YEAR IS
'Year partition value (YYYY format) when file was archived to ARCHIVE bucket with Hive-style partitioning';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PARTITION_MONTH IS
'Month partition value (MM format) when file was archived to ARCHIVE bucket with Hive-style partitioning';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.ARCH_PATH IS
'Archive directory prefix in ARCHIVE bucket containing archived Parquet files (supports multiple files from parallel DBMS_CLOUD.EXPORT_DATA)';
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_RECEIVED.PROCESS_NAME IS
'Name of the process or DBT model that ingested this file (populated during ingestion workflow)';
GRANT SELECT, INSERT, UPDATE, DELETE ON CT_MRDS.A_SOURCE_FILE_RECEIVED TO MRDS_LOADER_ROLE;
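The PROCESSING_STATUS column comment above describes a linear lifecycle with one failure branch. As a rough illustration (in Python rather than PL/SQL; the function name `is_valid_transition` is hypothetical and not part of the package), the allowed progression could be checked like this:

```python
# Linear lifecycle taken from the A_SOURCE_FILE_RECEIVED_CHK constraint and the
# PROCESSING_STATUS column comment. VALIDATION_FAILED is a terminal branch taken
# instead of VALIDATED when validation errors occur.
LIFECYCLE = [
    "RECEIVED", "VALIDATED", "READY_FOR_INGESTION", "INGESTED",
    "ARCHIVED", "ARCHIVED_AND_TRASHED", "ARCHIVED_AND_PURGED",
]

def is_valid_transition(old: str, new: str) -> bool:
    """Allow only the next step in the lifecycle, or the failure branch."""
    if old == "RECEIVED" and new == "VALIDATION_FAILED":
        return True
    if old in LIFECYCLE and new in LIFECYCLE:
        return LIFECYCLE.index(new) == LIFECYCLE.index(old) + 1
    return False
```

This is only a sketch of the state machine implied by the comment; the actual status updates happen inside the FILE_MANAGER/FILE_ARCHIVER procedures.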


@@ -0,0 +1,26 @@
-- ====================================================================
-- A_TABLE_STAT Table (rollback_version - pre MARS-1409)
-- ====================================================================
-- Purpose: Store current table statistics and archival thresholds
-- NOTE: This is the pre-MARS-1409 structure without:
-- ARCHIVAL_STRATEGY, ARCH_MINIMUM_AGE_MONTHS, IS_WORKFLOW_SUCCESS_REQUIRED,
-- WORKFLOW_SUCCESS_FILE_COUNT, WORKFLOW_SUCCESS_ROW_COUNT, WORKFLOW_SUCCESS_TOTAL_SIZE
-- Column names: SIZE (not TOTAL_SIZE), OVER_ARCH_THRESOLD_SIZE (not OVER_ARCH_THRESOLD_TOTAL_SIZE)
-- ====================================================================
CREATE TABLE CT_MRDS.A_TABLE_STAT (
A_TABLE_STAT_KEY NUMBER(38,0) PRIMARY KEY,
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL,
TABLE_NAME VARCHAR2(200) NOT NULL,
CREATED TIMESTAMP(6) DEFAULT SYSTIMESTAMP,
ARCH_THRESHOLD_DAYS NUMBER(4,0),
FILE_COUNT NUMBER(38,0),
ROW_COUNT NUMBER(38,0),
"SIZE" NUMBER(38,0),
OVER_ARCH_THRESOLD_FILE_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_ROW_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_SIZE NUMBER(38,0),
CONSTRAINT A_TABLE_STAT_UK1 UNIQUE(A_SOURCE_FILE_CONFIG_KEY)
) TABLESPACE "DATA";
-- Note: A_TABLE_STAT_UK1 index is auto-created by the UNIQUE constraint definition above.


@@ -0,0 +1,23 @@
-- ====================================================================
-- A_TABLE_STAT_HIST Table (rollback_version - pre MARS-1409)
-- ====================================================================
-- Purpose: Store historical table statistics for trend analysis
-- NOTE: This is the pre-MARS-1409 structure without:
-- ARCHIVAL_STRATEGY, ARCH_MINIMUM_AGE_MONTHS, IS_WORKFLOW_SUCCESS_REQUIRED,
-- WORKFLOW_SUCCESS_FILE_COUNT, WORKFLOW_SUCCESS_ROW_COUNT, WORKFLOW_SUCCESS_TOTAL_SIZE
-- Column names: SIZE (not TOTAL_SIZE), OVER_ARCH_THRESOLD_SIZE (not OVER_ARCH_THRESOLD_TOTAL_SIZE)
-- ====================================================================
CREATE TABLE CT_MRDS.A_TABLE_STAT_HIST (
A_TABLE_STAT_HIST_KEY NUMBER(38,0) PRIMARY KEY,
A_SOURCE_FILE_CONFIG_KEY NUMBER(38,0) NOT NULL,
TABLE_NAME VARCHAR2(200) NOT NULL,
CREATED TIMESTAMP(6) DEFAULT SYSTIMESTAMP,
ARCH_THRESHOLD_DAYS NUMBER(4,0),
FILE_COUNT NUMBER(38,0),
ROW_COUNT NUMBER(38,0),
"SIZE" NUMBER(38,0),
OVER_ARCH_THRESOLD_FILE_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_ROW_COUNT NUMBER(38,0),
OVER_ARCH_THRESOLD_SIZE NUMBER(38,0)
) TABLESPACE "DATA";
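The two statistics tables above hold per-table totals plus the share that exceeds ARCH_THRESHOLD_DAYS. A minimal Python sketch of that bookkeeping (the `FileStat` type and `table_stats` helper are illustrative, assuming age is measured in days since reception):

```python
from dataclasses import dataclass

@dataclass
class FileStat:
    age_days: int
    row_count: int
    size_bytes: int

def table_stats(files, arch_threshold_days):
    """Aggregate totals plus the over-threshold share, mirroring the
    FILE_COUNT / ROW_COUNT / SIZE and OVER_ARCH_THRESOLD_* columns
    (the THRESOLD spelling matches the actual column names)."""
    over = [f for f in files if f.age_days > arch_threshold_days]
    return {
        "FILE_COUNT": len(files),
        "ROW_COUNT": sum(f.row_count for f in files),
        "SIZE": sum(f.size_bytes for f in files),
        "OVER_ARCH_THRESOLD_FILE_COUNT": len(over),
        "OVER_ARCH_THRESOLD_ROW_COUNT": sum(f.row_count for f in over),
        "OVER_ARCH_THRESOLD_SIZE": sum(f.size_bytes for f in over),
    }
```

In the THRESHOLD_BASED strategy, archival is triggered when these over-threshold aggregates exceed the configured limits in A_SOURCE_FILE_CONFIG.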


@@ -0,0 +1,48 @@
WHENEVER SQLERROR CONTINUE
GRANT SELECT, INSERT, UPDATE, DELETE ON ct_ods.a_load_history TO ct_mrds;
WHENEVER SQLERROR EXIT SQL.SQLCODE
create or replace TRIGGER ct_mrds.a_workflow_history
AFTER INSERT OR UPDATE OF workflow_successful ON ct_mrds.a_workflow_history
REFERENCING NEW AS new OLD AS old
FOR EACH ROW
DECLARE
v_workflow_name VARCHAR2(128);
v_wla_id NUMBER;
BEGIN
IF :new.service_name = 'ODS' AND :new.workflow_name IN (
'w_ODS_LM_STANDING_FACILITIES', 'w_ODS_CSDB_DEBT', 'w_ODS_CSDB_DEBT_DAILY', 'w_ODS_CSDB_RATINGS_FULL',
'w_ODS_TMS_LIMIT_ACCESS', 'w_ODS_TMS_PORTFOLIO_ACCESS', 'w_ODS_TMS_PORTFOLIO_TREE',
'w_ODS_TMS_COLLATERAL_INVENTORY', 'w_ODS_TOP_FULLBIDARRAY_COMPILED', 'w_ODS_TOP_ANNOUNCEMENT',
'w_ODS_TOP_ALLOTMENT_MODIFICATIONS', 'w_ODS_TOP_ALLOTMENT', 'w_ODS_CEPH_PRICING', 'w_ODS_C2D_MPEC'
) THEN
IF :new.workflow_successful = 'Y' AND :new.workflow_successful <> NVL(:old.workflow_successful, 'N') THEN
CASE
WHEN :new.workflow_name = 'w_ODS_LM_STANDING_FACILITIES' THEN v_workflow_name := 'w_ODS_LM_STANDING_FACILITY';
WHEN :new.workflow_name = 'w_ODS_TMS_LIMIT_ACCESS' THEN v_workflow_name := 'w_ODS_TMS_RAR_LIMITACCESS';
WHEN :new.workflow_name = 'w_ODS_TMS_PORTFOLIO_ACCESS' THEN v_workflow_name := 'w_ODS_TMS_RAR_PORTFOLIOACCESS';
WHEN :new.workflow_name = 'w_ODS_TMS_PORTFOLIO_TREE' THEN v_workflow_name := 'w_ODS_TMS_RAR_PORTFOLIOTREE';
WHEN :new.workflow_name = 'w_ODS_TMS_COLLATERAL_INVENTORY' THEN v_workflow_name := 'w_ODS_TMS_RAR_RARCOLLATERALINVENTORY';
WHEN :new.workflow_name = 'w_ODS_TOP_FULLBIDARRAY_COMPILED' THEN v_workflow_name := 'w_ODS_TOP_FULLBIDARRAY_COMPILED';
WHEN :new.workflow_name = 'w_ODS_TOP_ANNOUNCEMENT' THEN v_workflow_name := 'w_ODS_TOP_ANNOUNCEMENT';
WHEN :new.workflow_name = 'w_ODS_TOP_ALLOTMENT_MODIFICATIONS' THEN v_workflow_name := 'w_ODS_TOP_ALLOTMENT_MODIFICATIONS';
WHEN :new.workflow_name = 'w_ODS_TOP_ALLOTMENT' THEN v_workflow_name := 'w_ODS_TOP_ALLOTMENT';
WHEN :new.workflow_name = 'w_ODS_CEPH_PRICING' THEN v_workflow_name := 'w_ODS_CEPH_PRICING';
WHEN :new.workflow_name = 'w_ODS_C2D_MPEC' THEN v_workflow_name := 'w_ODS_C2D_MPEC';
ELSE
v_workflow_name := :new.workflow_name;
END CASE;
BEGIN
v_wla_id := TO_NUMBER(:new.orchestration_run_id);
EXCEPTION WHEN OTHERS THEN v_wla_id := NULL; -- non-numeric orchestration_run_id maps to NULL
END;
INSERT INTO ct_ods.a_load_history (
a_etl_load_set_key, workflow_name, infa_run_id, load_start, load_end, exdi_appl_req_id, exdi_correlation_id, load_successful, wla_run_id, dq_flag
) VALUES (
:new.a_workflow_history_key, v_workflow_name, NULL, :new.workflow_start, :new.workflow_end, NULL, NULL, :new.workflow_successful, v_wla_id, 'F'
);
END IF;
END IF;
END;
/
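The trigger above does two things worth noting: it renames a handful of workflows before inserting into ct_ods.a_load_history, and it guards the TO_NUMBER conversion of orchestration_run_id so a non-numeric id degrades to NULL instead of raising. A Python sketch of both pieces (the dict and function names are illustrative, not part of the schema):

```python
# Renamed workflows, taken from the CASE expression in the trigger; names not in
# the map pass through unchanged (the remaining CASE branches are identity mappings).
WORKFLOW_ALIASES = {
    "w_ODS_LM_STANDING_FACILITIES": "w_ODS_LM_STANDING_FACILITY",
    "w_ODS_TMS_LIMIT_ACCESS": "w_ODS_TMS_RAR_LIMITACCESS",
    "w_ODS_TMS_PORTFOLIO_ACCESS": "w_ODS_TMS_RAR_PORTFOLIOACCESS",
    "w_ODS_TMS_PORTFOLIO_TREE": "w_ODS_TMS_RAR_PORTFOLIOTREE",
    "w_ODS_TMS_COLLATERAL_INVENTORY": "w_ODS_TMS_RAR_RARCOLLATERALINVENTORY",
}

def normalize(workflow_name: str) -> str:
    """Map a workflow name to its a_load_history equivalent."""
    return WORKFLOW_ALIASES.get(workflow_name, workflow_name)

def to_number_or_none(run_id):
    """Mirror of the guarded TO_NUMBER block: non-numeric ids become NULL/None."""
    try:
        return int(run_id)
    except (TypeError, ValueError):
        return None
```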

File diff suppressed because it is too large


@@ -0,0 +1,220 @@
create or replace PACKAGE CT_MRDS.DATA_EXPORTER
AUTHID CURRENT_USER
AS
/**
* Data Export Package: Provides comprehensive data export capabilities to various formats (CSV, Parquet)
* with support for cloud storage integration via Oracle Cloud Infrastructure (OCI).
* The structure of this comment is used by the GET_PACKAGE_DOCUMENTATION function,
* which returns documentation text for the Confluence page (ready to copy-paste).
**/
-- Package Version Information
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '2.17.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(19) := '2026-03-11 10:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(50) := 'MRDS Development Team';
-- Version History (last 3-5 changes)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'v2.17.0 (2026-03-11): PARQUET FIX - Added pFormat parameter to buildQueryWithDateFormats. REPLACE(col,CHR(34)) now applied only when pFormat=CSV. EXPORT_TABLE_DATA_BY_DATE passes PARQUET - string data was being corrupted (single " doubled to ""). Parquet is binary and needs no quote escaping.' || CHR(10) ||
'v2.16.0 (2026-03-11): RFC 4180 FIX - Added REPLACE(col,CHR(34),CHR(34)||CHR(34)) in buildQueryWithDateFormats for VARCHAR2/CHAR/CLOB. Pre-doubled values produce compliant CSV for ORACLE_LOADER OPTIONALLY ENCLOSED BY chr(34).' || CHR(10) ||
'v2.6.3 (2026-01-28): COMPILATION FIX - Resolved ORA-00904 error in EXPORT_PARTITION_PARALLEL. SQLERRM and DBMS_UTILITY.FORMAT_ERROR_BACKTRACE cannot be used directly in SQL UPDATE statements. Now properly assigned to vgMsgTmp variable before UPDATE.' || CHR(10) ||
'v2.6.2 (2026-01-28): CRITICAL FIX - Race condition when multiple exports run simultaneously. Changed DELETE to filter by age (>24h) instead of deleting all COMPLETED chunks. Prevents concurrent sessions from deleting each other''s chunks. Session-safe cleanup with TASK_NAME filtering. Enables true parallel execution of multiple export jobs.' || CHR(10) ||
'v2.6.1 (2026-01-28): Added DELETE_FAILED_EXPORT_FILE procedure to clean up partial/corrupted files before retry. When partition fails mid-export, partial file is deleted before retry to prevent Oracle from creating _1 suffixed duplicates. Ensures clean retry without orphaned files in OCI bucket.' || CHR(10) ||
'v2.6.0 (2026-01-28): CRITICAL FIX - Added STATUS tracking to A_PARALLEL_EXPORT_CHUNKS table to prevent data duplication on retry. System now restarts ONLY failed partitions instead of re-exporting all data. Added ERROR_MESSAGE and EXPORT_TIMESTAMP columns for better error handling and monitoring. Prevents duplicate file creation when parallel tasks fail (e.g., with 22 partitions on 16 threads, 3 failures no longer duplicate the 19 successful exports).' || CHR(10) ||
'v2.5.0 (2026-01-26): Added recorddelimiter parameter with CRLF (CHR(13)||CHR(10)) for CSV exports to ensure Windows-compatible line endings. Improves cross-platform compatibility when CSV files are opened in Windows applications (Notepad, Excel).' || CHR(10) ||
'v2.4.0 (2026-01-11): Added pTemplateTableName parameter for per-column date format configuration. Implements dynamic query building with TO_CHAR for each date/timestamp column using FILE_MANAGER.GET_DATE_FORMAT. Supports 3-tier hierarchy: column-specific, template DEFAULT, global fallback. Eliminates single dateformat limitation of DBMS_CLOUD.EXPORT_DATA.' || CHR(10) ||
'v2.3.0 (2025-12-20): Added parallel partition processing using DBMS_PARALLEL_EXECUTE. New pParallelDegree parameter (1-16, default 1) for EXPORT_TABLE_DATA_BY_DATE and EXPORT_TABLE_DATA_TO_CSV_BY_DATE procedures. Each year/month partition processed in separate thread for improved performance.' || CHR(10) ||
'v2.2.0 (2025-12-19): DRY refactoring - extracted shared helper functions (sanitizeFilename, VALIDATE_TABLE_AND_COLUMNS, GET_PARTITIONS, EXPORT_SINGLE_PARTITION worker procedure). Reduced code duplication by ~400 lines. Prepared architecture for v2.3.0 parallel processing.' || CHR(10) ||
'v2.1.1 (2025-12-04): Fixed JOIN column reference A_WORKFLOW_HISTORY_KEY -> A_ETL_LOAD_SET_KEY, added consistent column mapping and dynamic column list to EXPORT_TABLE_DATA procedure, enhanced DEBUG logging for all export operations' || CHR(10) ||
'v2.1.0 (2025-10-22): Added version tracking and PARTITION_YEAR/PARTITION_MONTH support' || CHR(10) ||
'v2.0.0 (2025-10-01): Separated export functionality from FILE_MANAGER package' || CHR(10);
cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);
vgMsgTmp VARCHAR2(32000);
---------------------------------------------------------------------------------------------------------------------------
-- TYPE DEFINITIONS FOR PARTITION HANDLING
---------------------------------------------------------------------------------------------------------------------------
/**
* Record type for year/month partition information
**/
TYPE partition_rec IS RECORD (
year VARCHAR2(4),
month VARCHAR2(2)
);
/**
* Table type for collection of partition records
**/
TYPE partition_tab IS TABLE OF partition_rec;
---------------------------------------------------------------------------------------------------------------------------
-- INTERNAL PARALLEL PROCESSING CALLBACK
---------------------------------------------------------------------------------------------------------------------------
/**
* @name EXPORT_PARTITION_PARALLEL
* @desc Internal callback procedure for DBMS_PARALLEL_EXECUTE.
* Processes single partition (year/month) chunk in parallel task.
* Called by DBMS_PARALLEL_EXECUTE framework for each chunk.
* This procedure is PUBLIC because DBMS_PARALLEL_EXECUTE requires it,
* but should NOT be called directly by external code.
* @param pStartId - Chunk start ID (CHUNK_ID from A_PARALLEL_EXPORT_CHUNKS table)
* @param pEndId - Chunk end ID (same as pStartId for single-row chunks)
**/
PROCEDURE EXPORT_PARTITION_PARALLEL (
pStartId IN NUMBER,
pEndId IN NUMBER
);
---------------------------------------------------------------------------------------------------------------------------
-- MAIN EXPORT PROCEDURES
---------------------------------------------------------------------------------------------------------------------------
/**
* @name EXPORT_TABLE_DATA
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data into a CSV file on OCI infrastructure.
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example
* begin
* DATA_EXPORTER.EXPORT_TABLE_DATA(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'csv_exports'
* );
* end;
**/
PROCEDURE EXPORT_TABLE_DATA (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
/**
* @name EXPORT_TABLE_DATA_BY_DATE
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data into PARQUET files on OCI infrastructure.
* Each YEAR_MONTH pair goes to a separate file (implicit partitioning).
* Allows specifying a custom column list, or uses T.* if pColumnList is NULL.
* Validates that all columns in pColumnList exist in the target table.
* Automatically adds 'T.' prefix to column names in pColumnList.
* Supports parallel partition processing via pParallelDegree parameter (default 1, range 1-16).
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example
* begin
* DATA_EXPORTER.EXPORT_TABLE_DATA_BY_DATE(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'parquet_exports',
* pColumnList => 'COLUMN1, COLUMN2, COLUMN3', -- Optional
* pMinDate => DATE '2024-01-01',
* pMaxDate => SYSDATE,
* pParallelDegree => 8 -- Optional, default 1, range 1-16
* );
* end;
**/
PROCEDURE EXPORT_TABLE_DATA_BY_DATE (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pColumnList IN VARCHAR2 default NULL,
pMinDate IN DATE default DATE '1900-01-01',
pMaxDate IN DATE default SYSDATE,
pParallelDegree IN NUMBER default 1,
pTemplateTableName IN VARCHAR2 default NULL,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
/**
* @name EXPORT_TABLE_DATA_TO_CSV_BY_DATE
* @desc Exports data to separate CSV files partitioned by year and month.
* Creates one CSV file for each year/month combination found in the data.
* Uses the same date filtering mechanism with CT_ODS.A_LOAD_HISTORY as EXPORT_TABLE_DATA_BY_DATE,
* but exports to CSV format instead of Parquet.
* Supports parallel partition processing via pParallelDegree parameter (1-16).
* File naming pattern: {pFileName}_YYYYMM.csv or {TABLENAME}_YYYYMM.csv (if pFileName is NULL)
* @example
* begin
* -- With custom filename
* DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
* pSchemaName => 'CT_MRDS',
* pTableName => 'MY_TABLE',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'DATA',
* pFolderName => 'exports',
* pFileName => 'my_export.csv',
* pMinDate => DATE '2024-01-01',
* pMaxDate => SYSDATE,
* pParallelDegree => 8 -- Optional, default 1, range 1-16
* );
*
* -- With auto-generated filename (based on table name only)
* DATA_EXPORTER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
* pSchemaName => 'OU_TOP',
* pTableName => 'AGGREGATED_ALLOTMENT',
* pKeyColumnName => 'A_ETL_LOAD_SET_KEY_FK',
* pBucketArea => 'ARCHIVE',
* pFolderName => 'exports',
* pMinDate => DATE '2025-09-01',
* pMaxDate => DATE '2025-09-17'
* );
* -- This will create files like: AGGREGATED_ALLOTMENT_202509.csv, etc.
* end;
* pBucketArea parameter accepts: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
**/
PROCEDURE EXPORT_TABLE_DATA_TO_CSV_BY_DATE (
pSchemaName IN VARCHAR2,
pTableName IN VARCHAR2,
pKeyColumnName IN VARCHAR2,
pBucketArea IN VARCHAR2,
pFolderName IN VARCHAR2,
pFileName IN VARCHAR2 DEFAULT NULL,
pColumnList IN VARCHAR2 default NULL,
pMinDate IN DATE default DATE '1900-01-01',
pMaxDate IN DATE default SYSDATE,
pParallelDegree IN NUMBER default 1,
pTemplateTableName IN VARCHAR2 default NULL,
pMaxFileSize IN NUMBER default 104857600,
pCredentialName IN VARCHAR2 default ENV_MANAGER.gvCredentialName
);
---------------------------------------------------------------------------------------------------------------------------
-- VERSION MANAGEMENT FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* Returns the current package version number
* return: Version string in format X.Y.Z (e.g., '2.1.0')
**/
FUNCTION GET_VERSION RETURN VARCHAR2;
/**
* Returns comprehensive build information including version, date, and author
* return: Formatted string with complete build details
**/
FUNCTION GET_BUILD_INFO RETURN VARCHAR2;
/**
* Returns the version history with recent changes
* return: Multi-line string with version history
**/
FUNCTION GET_VERSION_HISTORY RETURN VARCHAR2;
END;
/
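The v2.16.0/v2.17.0 entries in the version history above describe the RFC 4180 fix: embedded double quotes must be pre-doubled for CSV output (so ORACLE_LOADER's OPTIONALLY ENCLOSED BY chr(34) round-trips them), while Parquet is binary and must receive the value unchanged. A sketch of that rule in Python (the function name `escape_for_csv` is illustrative; the package applies the equivalent REPLACE(col, CHR(34), CHR(34)||CHR(34)) inside buildQueryWithDateFormats):

```python
def escape_for_csv(value: str, fmt: str) -> str:
    """Double embedded double quotes only for CSV output (RFC 4180).
    For any other format (e.g. PARQUET) the value passes through unchanged,
    mirroring the v2.17.0 fix where quote escaping corrupted Parquet data."""
    if fmt == "CSV":
        return value.replace('"', '""')
    return value
```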

File diff suppressed because it is too large


@@ -0,0 +1,625 @@
create or replace PACKAGE CT_MRDS.ENV_MANAGER
AUTHID CURRENT_USER
AS
/**
* General comment for the package: please document functions and procedures as shown in the example below.
* It is a standard.
* The structure of this comment is used by the GET_PACKAGE_DOCUMENTATION function,
* which returns documentation text for the Confluence page (ready to copy-paste).
**/
-- Example comment:
/**
* @name EX_PROCEDURE_NAME
* @desc Procedure description
* @example select ENV_MANAGER.EX_PROCEDURE_NAME(pParameter => 129) from dual;
* @ex_rslt Example Result
**/
-- Package Version Information (Semantic Versioning: MAJOR.MINOR.PATCH)
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.2.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2025-12-20 10:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(100) := 'Grzegorz Michalski';
-- Version History (Latest changes first)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'3.2.0 (2025-12-20): Added error codes for parallel execution support (CODE_INVALID_PARALLEL_DEGREE -20110, CODE_PARALLEL_EXECUTION_FAILED -20111)' || CHR(13)||CHR(10) ||
'3.1.0 (2025-10-22): Added package hash tracking and automatic change detection system (SHA256 hashing)' || CHR(13)||CHR(10) ||
'3.0.0 (2025-10-22): Added package versioning system with centralized version management functions' || CHR(13)||CHR(10) ||
'2.1.0 (2025-10-15): Added ANALYZE_VALIDATION_ERRORS function for comprehensive CSV validation analysis' || CHR(13)||CHR(10) ||
'2.0.0 (2025-10-01): Added LOG_PROCESS_ERROR procedure with enhanced error diagnostics and stack traces' || CHR(13)||CHR(10) ||
'1.5.0 (2025-09-20): Added console logging support with gvConsoleLoggingEnabled configuration' || CHR(13)||CHR(10) ||
'1.0.0 (2025-09-01): Initial release with error management and configuration system';
TYPE Error_Record IS RECORD (
code PLS_INTEGER,
message VARCHAR2(4000)
);
TYPE tErrorList IS TABLE OF Error_Record INDEX BY PLS_INTEGER;
Errors tErrorList;
guid VARCHAR2(32);
gvEnv VARCHAR2(200);
gvUsername VARCHAR2(128);
gvOsuser VARCHAR2(128);
gvMachine VARCHAR2(64);
gvModule VARCHAR2(64);
gvNameSpace VARCHAR2(200);
gvRegion VARCHAR2(200);
gvDataBucketName VARCHAR2(200);
gvInboxBucketName VARCHAR2(200);
gvArchiveBucketName VARCHAR2(200);
gvDataBucketUri VARCHAR2(200);
gvInboxBucketUri VARCHAR2(200);
gvArchiveBucketUri VARCHAR2(200);
gvCredentialName VARCHAR2(200);
-- Overwritten by variable "LoggingEnabled" in A_FILE_MANAGER_CONFIG.CONFIG_VARIABLE table
gvLoggingEnabled VARCHAR2(3) := 'ON'; -- 'ON' or 'OFF'
-- Overwritten by variable "MinLogLevel" in A_FILE_MANAGER_CONFIG.CONFIG_VARIABLE table
-- Possible values: DEBUG, INFO, WARNING, ERROR
gvMinLogLevel VARCHAR2(10) := 'DEBUG';
-- Overwritten by variable "DefaultDateFormat" in A_FILE_MANAGER_CONFIG.CONFIG_VARIABLE table
gvDefaultDateFormat VARCHAR2(200) := 'DD/MM/YYYY HH24:MI:SS';
-- Overwritten by variable "ConsoleLoggingEnabled" in A_FILE_MANAGER_CONFIG.CONFIG_VARIABLE table
gvConsoleLoggingEnabled VARCHAR2(3) := 'ON'; -- 'ON' or 'OFF'
cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);
vgSourceFileConfigKey PLS_INTEGER;
vgMsgTmp VARCHAR2(32000);
--Exceptions
ERR_EMPTY_FILEURI_AND_RECKEY EXCEPTION;
CODE_EMPTY_FILEURI_AND_RECKEY CONSTANT PLS_INTEGER := -20001;
MSG_EMPTY_FILEURI_AND_RECKEY VARCHAR2(4000) := 'Either pFileUri or pSourceFileReceivedKey must not be null';
PRAGMA EXCEPTION_INIT( ERR_EMPTY_FILEURI_AND_RECKEY
,CODE_EMPTY_FILEURI_AND_RECKEY);
ERR_NO_CONFIG_MATCH_FOR_FILEURI EXCEPTION;
CODE_NO_CONFIG_MATCH_FOR_FILEURI CONSTANT PLS_INTEGER := -20002;
MSG_NO_CONFIG_MATCH_FOR_FILEURI VARCHAR2(4000) := 'No match for source file in A_SOURCE_FILE_CONFIG table'
||cgBL||' The file provided in parameter: pFileUri does not have '
||cgBL||' corresponding configuration in A_SOURCE_FILE_CONFIG table';
PRAGMA EXCEPTION_INIT( ERR_NO_CONFIG_MATCH_FOR_FILEURI
,CODE_NO_CONFIG_MATCH_FOR_FILEURI);
ERR_MULTIPLE_MATCH_FOR_SRCFILE EXCEPTION;
CODE_MULTIPLE_MATCH_FOR_SRCFILE CONSTANT PLS_INTEGER := -20003;
MSG_MULTIPLE_MATCH_FOR_SRCFILE VARCHAR2(4000) := 'Multiple match for source file in A_SOURCE_FILE_CONFIG table';
PRAGMA EXCEPTION_INIT( ERR_MULTIPLE_MATCH_FOR_SRCFILE
,CODE_MULTIPLE_MATCH_FOR_SRCFILE);
ERR_MISSING_COLUMN_DATE_FORMAT EXCEPTION;
CODE_MISSING_COLUMN_DATE_FORMAT CONSTANT PLS_INTEGER := -20004;
MSG_MISSING_COLUMN_DATE_FORMAT VARCHAR2(4000) := 'Missing entry in config table: A_COLUMN_DATE_FORMAT primary key(TEMPLATE_TABLE_NAME, COLUMN_NAME)'
||cgBL||' Remember: each column which data_type IN (''DATE'', ''TIMESTAMP'')'
||cgBL||' should have DateFormat specified in A_COLUMN_DATE_FORMAT table '
||cgBL||' for example: ''YYYY-MM-DD''';
PRAGMA EXCEPTION_INIT( ERR_MISSING_COLUMN_DATE_FORMAT
,CODE_MISSING_COLUMN_DATE_FORMAT);
ERR_MULTIPLE_COLUMN_DATE_FORMAT EXCEPTION;
CODE_MULTIPLE_COLUMN_DATE_FORMAT CONSTANT PLS_INTEGER := -20005;
MSG_MULTIPLE_COLUMN_DATE_FORMAT VARCHAR2(4000) := 'Multiple records for date format in A_COLUMN_DATE_FORMAT table'
||cgBL||' There should be only one format specified for each DATE/TIMESTAMP column';
PRAGMA EXCEPTION_INIT( ERR_MULTIPLE_COLUMN_DATE_FORMAT
,CODE_MULTIPLE_COLUMN_DATE_FORMAT);
ERR_DIDNT_GET_LOAD_OPERATION_ID EXCEPTION;
CODE_DIDNT_GET_LOAD_OPERATION_ID CONSTANT PLS_INTEGER := -20006;
MSG_DIDNT_GET_LOAD_OPERATION_ID VARCHAR2(4000) := 'Did not get load operation id from external table validation';
PRAGMA EXCEPTION_INIT( ERR_DIDNT_GET_LOAD_OPERATION_ID
,CODE_DIDNT_GET_LOAD_OPERATION_ID);
ERR_NO_CONFIG_FOR_RECEIVED_FILE EXCEPTION;
CODE_NO_CONFIG_FOR_RECEIVED_FILE CONSTANT PLS_INTEGER := -20007;
MSG_NO_CONFIG_FOR_RECEIVED_FILE VARCHAR2(4000) := 'No match for received source file in A_SOURCE_FILE_CONFIG '
||cgBL||' or missing data in A_SOURCE_FILE_RECEIVED table for provided pSourceFileReceivedKey parameter';
PRAGMA EXCEPTION_INIT( ERR_NO_CONFIG_FOR_RECEIVED_FILE
,CODE_NO_CONFIG_FOR_RECEIVED_FILE);
ERR_MULTI_CONFIG_FOR_RECEIVED_FILE EXCEPTION;
CODE_MULTI_CONFIG_FOR_RECEIVED_FILE CONSTANT PLS_INTEGER := -20008;
MSG_MULTI_CONFIG_FOR_RECEIVED_FILE VARCHAR2(4000) := 'Multiple matches for received source file in A_SOURCE_FILE_CONFIG';
PRAGMA EXCEPTION_INIT( ERR_MULTI_CONFIG_FOR_RECEIVED_FILE
,CODE_MULTI_CONFIG_FOR_RECEIVED_FILE);
ERR_FILE_NOT_FOUND_ON_CLOUD EXCEPTION;
CODE_FILE_NOT_FOUND_ON_CLOUD CONSTANT PLS_INTEGER := -20009;
MSG_FILE_NOT_FOUND_ON_CLOUD VARCHAR2(4000) := 'File not found on the cloud';
PRAGMA EXCEPTION_INIT( ERR_FILE_NOT_FOUND_ON_CLOUD
,CODE_FILE_NOT_FOUND_ON_CLOUD);
ERR_FILE_VALIDATION_FAILED EXCEPTION;
CODE_FILE_VALIDATION_FAILED CONSTANT PLS_INTEGER := -20010;
MSG_FILE_VALIDATION_FAILED VARCHAR2(4000) := 'File validation failed';
PRAGMA EXCEPTION_INIT( ERR_FILE_VALIDATION_FAILED
,CODE_FILE_VALIDATION_FAILED);
ERR_EXCESS_COLUMNS_DETECTED EXCEPTION;
CODE_EXCESS_COLUMNS_DETECTED CONSTANT PLS_INTEGER := -20011;
MSG_EXCESS_COLUMNS_DETECTED VARCHAR2(4000) := 'CSV file contains more columns than template allows';
PRAGMA EXCEPTION_INIT( ERR_EXCESS_COLUMNS_DETECTED
,CODE_EXCESS_COLUMNS_DETECTED);
ERR_NO_CONFIG_MATCH EXCEPTION;
CODE_NO_CONFIG_MATCH CONSTANT PLS_INTEGER := -20012;
MSG_NO_CONFIG_MATCH VARCHAR2(4000) := 'No match for specified parameters in A_SOURCE_FILE_CONFIG table';
PRAGMA EXCEPTION_INIT( ERR_NO_CONFIG_MATCH
,CODE_NO_CONFIG_MATCH);
ERR_UNKNOWN_PREFIX EXCEPTION;
CODE_UNKNOWN_PREFIX CONSTANT PLS_INTEGER := -20013;
MSG_UNKNOWN_PREFIX VARCHAR2(4000) := 'Unknown prefix';
PRAGMA EXCEPTION_INIT( ERR_UNKNOWN_PREFIX
,CODE_UNKNOWN_PREFIX);
ERR_TABLE_NOT_EXISTS EXCEPTION;
CODE_TABLE_NOT_EXISTS CONSTANT PLS_INTEGER := -20014;
MSG_TABLE_NOT_EXISTS VARCHAR2(4000) := 'Table does not exist';
PRAGMA EXCEPTION_INIT( ERR_TABLE_NOT_EXISTS
,CODE_TABLE_NOT_EXISTS);
ERR_COLUMN_NOT_EXISTS EXCEPTION;
CODE_COLUMN_NOT_EXISTS CONSTANT PLS_INTEGER := -20015;
MSG_COLUMN_NOT_EXISTS VARCHAR2(4000) := 'Column does not exist in table';
PRAGMA EXCEPTION_INIT( ERR_COLUMN_NOT_EXISTS
,CODE_COLUMN_NOT_EXISTS);
ERR_UNSUPPORTED_DATA_TYPE EXCEPTION;
CODE_UNSUPPORTED_DATA_TYPE CONSTANT PLS_INTEGER := -20016;
MSG_UNSUPPORTED_DATA_TYPE VARCHAR2(4000) := 'Unsupported data type';
PRAGMA EXCEPTION_INIT( ERR_UNSUPPORTED_DATA_TYPE
,CODE_UNSUPPORTED_DATA_TYPE);
ERR_MISSING_SOURCE_KEY EXCEPTION;
CODE_MISSING_SOURCE_KEY CONSTANT PLS_INTEGER := -20017;
MSG_MISSING_SOURCE_KEY VARCHAR2(4000) := 'The Source was not found in parent table A_SOURCE';
PRAGMA EXCEPTION_INIT( ERR_MISSING_SOURCE_KEY
,CODE_MISSING_SOURCE_KEY);
ERR_NULL_SOURCE_FILE_CONFIG_KEY EXCEPTION;
CODE_NULL_SOURCE_FILE_CONFIG_KEY CONSTANT PLS_INTEGER := -20018;
MSG_NULL_SOURCE_FILE_CONFIG_KEY VARCHAR2(4000) := 'No entry in A_SOURCE_FILE_CONFIG table for specified A_SOURCE_FILE_CONFIG_KEY';
PRAGMA EXCEPTION_INIT( ERR_NULL_SOURCE_FILE_CONFIG_KEY
,CODE_NULL_SOURCE_FILE_CONFIG_KEY);
ERR_DUPLICATED_SOURCE_KEY EXCEPTION;
CODE_DUPLICATED_SOURCE_KEY CONSTANT PLS_INTEGER := -20019;
MSG_DUPLICATED_SOURCE_KEY VARCHAR2(4000) := 'The Source already exists in the A_SOURCE table';
PRAGMA EXCEPTION_INIT( ERR_DUPLICATED_SOURCE_KEY
,CODE_DUPLICATED_SOURCE_KEY);
ERR_MISSING_CONTAINER_CONFIG EXCEPTION;
CODE_MISSING_CONTAINER_CONFIG CONSTANT PLS_INTEGER := -20020;
MSG_MISSING_CONTAINER_CONFIG VARCHAR2(4000) := 'No match in A_SOURCE_FILE_CONFIG table where SOURCE_FILE_TYPE=''CONTAINER'' and specified SOURCE_FILE_ID';
PRAGMA EXCEPTION_INIT( ERR_MISSING_CONTAINER_CONFIG
,CODE_MISSING_CONTAINER_CONFIG);
ERR_MULTIPLE_CONTAINER_ENTRIES EXCEPTION;
CODE_MULTIPLE_CONTAINER_ENTRIES CONSTANT PLS_INTEGER := -20021;
MSG_MULTIPLE_CONTAINER_ENTRIES VARCHAR2(4000) := 'Multiple matches in A_SOURCE_FILE_CONFIG table where SOURCE_FILE_TYPE=''CONTAINER'' and specified SOURCE_FILE_ID';
PRAGMA EXCEPTION_INIT( ERR_MULTIPLE_CONTAINER_ENTRIES
,CODE_MULTIPLE_CONTAINER_ENTRIES);
ERR_WRONG_DESTINATION_PARAM EXCEPTION;
CODE_WRONG_DESTINATION_PARAM CONSTANT PLS_INTEGER := -20022;
MSG_WRONG_DESTINATION_PARAM VARCHAR2(4000) := 'Wrong destination parameter provided.';
PRAGMA EXCEPTION_INIT( ERR_WRONG_DESTINATION_PARAM
,CODE_WRONG_DESTINATION_PARAM);
ERR_FILE_NOT_EXISTS_ON_CLOUD EXCEPTION;
CODE_FILE_NOT_EXISTS_ON_CLOUD CONSTANT PLS_INTEGER := -20023;
MSG_FILE_NOT_EXISTS_ON_CLOUD VARCHAR2(4000) := 'File does not exist on cloud.';
PRAGMA EXCEPTION_INIT( ERR_FILE_NOT_EXISTS_ON_CLOUD
,CODE_FILE_NOT_EXISTS_ON_CLOUD);
ERR_FILE_ALREADY_REGISTERED EXCEPTION;
CODE_FILE_ALREADY_REGISTERED CONSTANT PLS_INTEGER := -20024;
MSG_FILE_ALREADY_REGISTERED VARCHAR2(4000) := 'File already registered in A_SOURCE_FILE_RECEIVED table.';
PRAGMA EXCEPTION_INIT( ERR_FILE_ALREADY_REGISTERED
,CODE_FILE_ALREADY_REGISTERED);
ERR_WRONG_DATE_TIMESTAMP_FORMAT EXCEPTION;
CODE_WRONG_DATE_TIMESTAMP_FORMAT CONSTANT PLS_INTEGER := -20025;
MSG_WRONG_DATE_TIMESTAMP_FORMAT VARCHAR2(4000) := 'Provided DATE or TIMESTAMP format has errors (possible duplicated codes, ex: ''DD'').';
PRAGMA EXCEPTION_INIT( ERR_WRONG_DATE_TIMESTAMP_FORMAT
,CODE_WRONG_DATE_TIMESTAMP_FORMAT);
ERR_ENVIRONMENT_NOT_SET EXCEPTION;
CODE_ENVIRONMENT_NOT_SET CONSTANT PLS_INTEGER := -20026;
MSG_ENVIRONMENT_NOT_SET VARCHAR2(4000) := 'EnvironmentID not set'
||cgBL||' Information about environment is needed to get proper configuration values.'
||cgBL||' It can be set up in two different ways:'
||cgBL||' 1. Set it on session level: execute DBMS_SESSION.SET_IDENTIFIER (client_id => ''dev'')'
||cgBL||' 2. Set it on configuration level: Insert into CT_MRDS.A_FILE_MANAGER_CONFIG (ENVIRONMENT_ID,CONFIG_VARIABLE,CONFIG_VARIABLE_VALUE) values (''default'',''environment_id'',''dev'')'
||cgBL||' Session level setup (1.) takes precedence over configuration level one (2.)'
;
PRAGMA EXCEPTION_INIT( ERR_ENVIRONMENT_NOT_SET
,CODE_ENVIRONMENT_NOT_SET);
ERR_CONFIG_VARIABLE_NOT_SET EXCEPTION;
CODE_CONFIG_VARIABLE_NOT_SET CONSTANT PLS_INTEGER := -20027;
MSG_CONFIG_VARIABLE_NOT_SET VARCHAR2(4000) := 'Missing configuration value in A_FILE_MANAGER_CONFIG';
PRAGMA EXCEPTION_INIT( ERR_CONFIG_VARIABLE_NOT_SET
,CODE_CONFIG_VARIABLE_NOT_SET);
ERR_NOT_INPUT_SOURCE_FILE_TYPE EXCEPTION;
CODE_NOT_INPUT_SOURCE_FILE_TYPE CONSTANT PLS_INTEGER := -20028;
MSG_NOT_INPUT_SOURCE_FILE_TYPE VARCHAR2(4000) := 'Archival can be executed only for A_SOURCE_FILE_CONFIG_KEY where SOURCE_FILE_TYPE=''INPUT''';
PRAGMA EXCEPTION_INIT( ERR_NOT_INPUT_SOURCE_FILE_TYPE
,CODE_NOT_INPUT_SOURCE_FILE_TYPE);
ERR_EXP_DATA_FOR_ARCH_FAILED EXCEPTION;
CODE_EXP_DATA_FOR_ARCH_FAILED CONSTANT PLS_INTEGER := -20029;
MSG_EXP_DATA_FOR_ARCH_FAILED VARCHAR2(4000) := 'Export data for archival failed.';
PRAGMA EXCEPTION_INIT( ERR_EXP_DATA_FOR_ARCH_FAILED
,CODE_EXP_DATA_FOR_ARCH_FAILED);
ERR_RESTORE_FILE_FROM_TRASH EXCEPTION;
CODE_RESTORE_FILE_FROM_TRASH CONSTANT PLS_INTEGER := -20030;
MSG_RESTORE_FILE_FROM_TRASH VARCHAR2(4000) := 'Unexpected issues occurred during the archival process. Restoration of exported files failed.';
PRAGMA EXCEPTION_INIT( ERR_RESTORE_FILE_FROM_TRASH
,CODE_RESTORE_FILE_FROM_TRASH);
ERR_CHANGE_STAT_TO_ARCHIVED_FAILED EXCEPTION;
CODE_CHANGE_STAT_TO_ARCHIVED_FAILED CONSTANT PLS_INTEGER := -20031;
MSG_CHANGE_STAT_TO_ARCHIVED_FAILED VARCHAR2(4000) := 'Failed to change file status to: ARCHIVED in A_SOURCE_FILE_RECEIVED table.';
PRAGMA EXCEPTION_INIT( ERR_CHANGE_STAT_TO_ARCHIVED_FAILED
,CODE_CHANGE_STAT_TO_ARCHIVED_FAILED);
ERR_MOVE_FILE_TO_TRASH_FAILED EXCEPTION;
CODE_MOVE_FILE_TO_TRASH_FAILED CONSTANT PLS_INTEGER := -20032;
MSG_MOVE_FILE_TO_TRASH_FAILED VARCHAR2(4000) := 'FAILED to move file to TRASH before DROPPING it.';
PRAGMA EXCEPTION_INIT( ERR_MOVE_FILE_TO_TRASH_FAILED
,CODE_MOVE_FILE_TO_TRASH_FAILED);
ERR_DROP_EXPORTED_FILES_FAILED EXCEPTION;
CODE_DROP_EXPORTED_FILES_FAILED CONSTANT PLS_INTEGER := -20033;
MSG_DROP_EXPORTED_FILES_FAILED VARCHAR2(4000) := 'FAILED to DROP exported files.';
PRAGMA EXCEPTION_INIT( ERR_DROP_EXPORTED_FILES_FAILED
,CODE_DROP_EXPORTED_FILES_FAILED);
ERR_INVALID_BUCKET_AREA EXCEPTION;
CODE_INVALID_BUCKET_AREA CONSTANT PLS_INTEGER := -20034;
MSG_INVALID_BUCKET_AREA VARCHAR2(4000) := 'Invalid bucket area specified. Valid values: INBOX, ODS, DATA, ARCHIVE';
PRAGMA EXCEPTION_INIT( ERR_INVALID_BUCKET_AREA
,CODE_INVALID_BUCKET_AREA);
ERR_INVALID_PARALLEL_DEGREE EXCEPTION;
CODE_INVALID_PARALLEL_DEGREE CONSTANT PLS_INTEGER := -20110;
MSG_INVALID_PARALLEL_DEGREE VARCHAR2(4000) := 'Invalid parallel degree parameter. Must be between 1 and 16';
PRAGMA EXCEPTION_INIT( ERR_INVALID_PARALLEL_DEGREE
,CODE_INVALID_PARALLEL_DEGREE);
ERR_PARALLEL_EXECUTION_FAILED EXCEPTION;
CODE_PARALLEL_EXECUTION_FAILED CONSTANT PLS_INTEGER := -20111;
MSG_PARALLEL_EXECUTION_FAILED VARCHAR2(4000) := 'Parallel execution failed';
PRAGMA EXCEPTION_INIT( ERR_PARALLEL_EXECUTION_FAILED
,CODE_PARALLEL_EXECUTION_FAILED);
ERR_UNKNOWN EXCEPTION;
CODE_UNKNOWN CONSTANT PLS_INTEGER := -20999;
MSG_UNKNOWN VARCHAR2(4000) := 'Unknown Error Occurred';
PRAGMA EXCEPTION_INIT( ERR_UNKNOWN
,CODE_UNKNOWN);
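-- Hedged usage sketch (illustrative only, not part of this package): because of the
-- PRAGMA EXCEPTION_INIT declarations above, callers can catch these errors by name
-- instead of inspecting SQLCODE. The call site below is a placeholder.
/*
BEGIN
  NULL; -- a call that may raise ORA-20023 would go here
EXCEPTION
  WHEN ENV_MANAGER.ERR_FILE_NOT_EXISTS_ON_CLOUD THEN
    DBMS_OUTPUT.PUT_LINE(ENV_MANAGER.GET_ERROR_MESSAGE(pCode => SQLCODE));
END;
*/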
---------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------
/**
* @name LOG_PROCESS_EVENT
* @desc Insert a new log record into A_PROCESS_LOG table.
* Also outputs to console if gvConsoleLoggingEnabled = 'ON'.
* Respects logging level configuration (gvMinLogLevel).
* @example ENV_MANAGER.LOG_PROCESS_EVENT('Process completed successfully', 'INFO', 'pParam1=value1');
* @ex_rslt Record inserted into A_PROCESS_LOG table and optionally displayed in console output
**/
PROCEDURE LOG_PROCESS_EVENT (
pLogMessage VARCHAR2
,pLogLevel VARCHAR2 DEFAULT 'ERROR'
,pParameters VARCHAR2 DEFAULT NULL
,pProcessName VARCHAR2 DEFAULT 'FILE_MANAGER'
);
/**
* @name LOG_PROCESS_ERROR
* @desc Insert a detailed error record into A_PROCESS_LOG table with full stack trace, backtrace, and call stack.
* This procedure captures comprehensive error information for debugging purposes while
* allowing clean user-facing error messages to be raised separately.
* @param pLogMessage - Base error message description
* @param pParameters - Procedure parameters for context
* @param pProcessName - Name of the calling process/package
* @ex_rslt Record inserted into A_PROCESS_LOG table with complete error stack information
*/
PROCEDURE LOG_PROCESS_ERROR (
pLogMessage VARCHAR2
,pParameters VARCHAR2 DEFAULT NULL
,pProcessName VARCHAR2 DEFAULT 'FILE_MANAGER'
);
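-- Hedged usage sketch (illustrative only): calling LOG_PROCESS_ERROR from an
-- exception handler; the message, parameter string, and process name below are
-- example values, not taken from this package.
/*
BEGIN
  NULL; -- work that may fail
EXCEPTION
  WHEN OTHERS THEN
    ENV_MANAGER.LOG_PROCESS_ERROR(
       pLogMessage  => 'Unexpected failure during file processing'
      ,pParameters  => 'pSourceFileConfigKey=123'
      ,pProcessName => 'FILE_MANAGER');
    RAISE;
END;
*/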
/**
* @name INIT_ERRORS
* @desc Loads data into the Errors array.
* The Errors array is a list of Record(Error_Code, Error_Message) indexed by Error_Code.
* Called automatically during package initialization.
* @example Called automatically when package is first referenced
* @ex_rslt Errors array populated with all error codes and messages
**/
PROCEDURE INIT_ERRORS;
/**
* @name GET_DEFAULT_ENV
* @desc Returns the name of the default environment.
* The returned string is the A_FILE_MANAGER_CONFIG.ENVIRONMENT_ID value.
* @example select ENV_MANAGER.GET_DEFAULT_ENV() from dual;
* @ex_rslt dev
**/
FUNCTION GET_DEFAULT_ENV
RETURN VARCHAR2;
/**
* @name INIT_VARIABLES
* @desc For specified pEnv parameter (A_FILE_MANAGER_CONFIG.ENVIRONMENT_ID)
* Assign values to following global package variables:
* - gvNameSpace
* - gvRegion
* - gvCredentialName
* - gvInboxBucketName
* - gvDataBucketName
* - gvArchiveBucketName
* - gvInboxBucketUri
* - gvDataBucketUri
* - gvArchiveBucketUri
* - gvLoggingEnabled
* - gvMinLogLevel
* - gvDefaultDateFormat
* - gvConsoleLoggingEnabled
**/
PROCEDURE INIT_VARIABLES(
pEnv VARCHAR2
);
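-- Hedged usage sketch (illustrative only): initializing the package globals for
-- an environment; 'dev' is assumed to exist as an A_FILE_MANAGER_CONFIG.ENVIRONMENT_ID.
/*
BEGIN
  ENV_MANAGER.INIT_VARIABLES(pEnv => 'dev');
END;
*/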
/**
* @name GET_ERROR_MESSAGE
* @desc Returns the error message for the specified pCode (Error_Code).
* The error message is taken from the Errors array loaded by the INIT_ERRORS procedure.
* @example select ENV_MANAGER.GET_ERROR_MESSAGE(pCode => -20009) from dual;
* @ex_rslt File not found on the cloud
**/
FUNCTION GET_ERROR_MESSAGE(
pCode PLS_INTEGER
) RETURN VARCHAR2;
/**
* @name GET_ERROR_STACK
* @desc Returns a string with all available error stack information.
* The error message is taken from the Errors array loaded by the INIT_ERRORS procedure.
* @example
* select ENV_MANAGER.GET_ERROR_STACK(
* pFormat => 'OUTPUT'
* ,pCode => -20009
* ,pSourceFileReceivedKey => NULL)
* from dual
* @ex_rslt
* ------------------------------------------------------+
* Error Message:
* ORA-0000: normal, successful completion
* -------------------------------------------------------
* Error Stack:
* -------------------------------------------------------
* Error Backtrace:
* ------------------------------------------------------+
**/
FUNCTION GET_ERROR_STACK(
pFormat VARCHAR2
,pCode PLS_INTEGER
,pSourceFileReceivedKey CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE DEFAULT NULL
) RETURN VARCHAR2;
/**
* @name FORMAT_PARAMETERS
* @desc Formats parameter list for logging purposes.
* Converts SYS.ODCIVARCHAR2LIST to formatted string with proper NULL handling.
* @example select ENV_MANAGER.FORMAT_PARAMETERS(SYS.ODCIVARCHAR2LIST('param1=value1', 'param2=NULL')) from dual;
* @ex_rslt param1=value1 ,
* param2=NULL
**/
FUNCTION FORMAT_PARAMETERS(
pParameterList SYS.ODCIVARCHAR2LIST
) RETURN VARCHAR2;
/**
* @name ANALYZE_VALIDATION_ERRORS
* @desc Analyzes CSV validation errors and generates detailed diagnostic report.
* Compares CSV structure with template table and provides specific error analysis.
* Includes suggested solutions for common validation issues.
* @param pValidationLogTable - Name of validation log table (e.g., VALIDATE$242_LOG)
* @param pTemplateSchema - Schema of template table (e.g., CT_ET_TEMPLATES)
* @param pTemplateTable - Name of template table (e.g., MOCK_PROC_TABLE)
* @param pCsvFileUri - URI of CSV file being validated
* @example SELECT ENV_MANAGER.ANALYZE_VALIDATION_ERRORS('VALIDATE$242_LOG', 'CT_ET_TEMPLATES', 'MOCK_PROC_TABLE', 'https://...') FROM DUAL;
* @ex_rslt Detailed validation analysis report with column mismatches and solutions
**/
FUNCTION ANALYZE_VALIDATION_ERRORS(
pValidationLogTable VARCHAR2,
pTemplateSchema VARCHAR2,
pTemplateTable VARCHAR2,
pCsvFileUri VARCHAR2
) RETURN VARCHAR2;
---------------------------------------------------------------------------------------------------------------------------
-- PACKAGE VERSION MANAGEMENT FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* @name GET_VERSION
* @desc Returns the current version number of the ENV_MANAGER package.
* Uses semantic versioning format (MAJOR.MINOR.PATCH).
* @example SELECT ENV_MANAGER.GET_VERSION() FROM DUAL;
* @ex_rslt 3.0.0
**/
FUNCTION GET_VERSION RETURN VARCHAR2;
/**
* @name GET_BUILD_INFO
* @desc Returns comprehensive build information including version, build date, and author.
* Formatted for display in logs or monitoring systems.
* @example SELECT ENV_MANAGER.GET_BUILD_INFO() FROM DUAL;
* @ex_rslt Package: ENV_MANAGER
* Version: 3.0.0
* Build Date: 2025-10-22 16:00:00
* Author: Grzegorz Michalski
**/
FUNCTION GET_BUILD_INFO RETURN VARCHAR2;
/**
* @name GET_VERSION_HISTORY
* @desc Returns complete version history with all releases and changes.
* Shows evolution of package features over time.
* @example SELECT ENV_MANAGER.GET_VERSION_HISTORY() FROM DUAL;
* @ex_rslt ENV_MANAGER Version History:
* 3.0.0 (2025-10-22): Added package versioning system...
* 2.1.0 (2025-10-15): Added ANALYZE_VALIDATION_ERRORS function...
**/
FUNCTION GET_VERSION_HISTORY RETURN VARCHAR2;
/**
* @name GET_PACKAGE_VERSION_INFO
* @desc Universal function to get formatted version information for any package.
* This centralized function is used by all packages in the system.
* @param pPackageName - Name of the package
* @param pVersion - Version string (MAJOR.MINOR.PATCH format)
* @param pBuildDate - Build date timestamp
* @param pAuthor - Package author name
* @example SELECT ENV_MANAGER.GET_PACKAGE_VERSION_INFO('FILE_MANAGER', '2.1.0', '2025-10-22 15:00:00', 'Grzegorz Michalski') FROM DUAL;
* @ex_rslt Package: FILE_MANAGER
* Version: 2.1.0
* Build Date: 2025-10-22 15:00:00
* Author: Grzegorz Michalski
**/
FUNCTION GET_PACKAGE_VERSION_INFO(
pPackageName VARCHAR2,
pVersion VARCHAR2,
pBuildDate VARCHAR2,
pAuthor VARCHAR2
) RETURN VARCHAR2;
/**
* @name FORMAT_VERSION_HISTORY
* @desc Universal function to format version history for any package.
* Adds package name header and proper formatting.
* @param pPackageName - Name of the package
* @param pVersionHistory - Complete version history text
* @example SELECT ENV_MANAGER.FORMAT_VERSION_HISTORY('FILE_MANAGER', '2.1.0 (2025-10-22): Export procedures...') FROM DUAL;
* @ex_rslt FILE_MANAGER Version History:
* 2.1.0 (2025-10-22): Export procedures...
**/
FUNCTION FORMAT_VERSION_HISTORY(
pPackageName VARCHAR2,
pVersionHistory VARCHAR2
) RETURN VARCHAR2;
---------------------------------------------------------------------------------------------------------------------------
-- PACKAGE HASH + CHANGE DETECTION FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* @name CALCULATE_PACKAGE_HASH
* @desc Calculates SHA256 hash of package source code from ALL_SOURCE.
* Returns hash for both SPEC and BODY (if exists).
* Used for automatic change detection.
* @param pPackageOwner - Schema owner of the package
* @param pPackageName - Name of the package
* @param pPackageType - Type of package code ('PACKAGE' for SPEC, 'PACKAGE BODY' for BODY)
* @example SELECT ENV_MANAGER.CALCULATE_PACKAGE_HASH('CT_MRDS', 'FILE_MANAGER', 'PACKAGE') FROM DUAL;
* @ex_rslt A7B3C5D9E8F1234567890ABCDEF... (64-character SHA256 hash)
**/
FUNCTION CALCULATE_PACKAGE_HASH(
pPackageOwner VARCHAR2,
pPackageName VARCHAR2,
pPackageType VARCHAR2 -- 'PACKAGE' or 'PACKAGE BODY'
) RETURN VARCHAR2;
/**
* @name TRACK_PACKAGE_VERSION
* @desc Records package version and source code hash in A_PACKAGE_VERSION_TRACKING table.
* Automatically detects if source code changed without version update.
* Should be called after every package deployment.
* @param pPackageOwner - Schema owner of the package
* @param pPackageName - Name of the package
* @param pPackageVersion - Current version from PACKAGE_VERSION constant
* @param pPackageBuildDate - Build date from PACKAGE_BUILD_DATE constant
* @param pPackageAuthor - Author from PACKAGE_AUTHOR constant
* @example EXEC ENV_MANAGER.TRACK_PACKAGE_VERSION('CT_MRDS', 'FILE_MANAGER', '3.2.0', '2025-10-22 16:30:00', 'Grzegorz Michalski');
* @ex_rslt Record inserted into A_PACKAGE_VERSION_TRACKING with change detection status
**/
PROCEDURE TRACK_PACKAGE_VERSION(
pPackageOwner VARCHAR2,
pPackageName VARCHAR2,
pPackageVersion VARCHAR2,
pPackageBuildDate VARCHAR2,
pPackageAuthor VARCHAR2
);
/**
* @name CHECK_PACKAGE_CHANGES
* @desc Checks if package source code has changed since last tracking.
* Compares current hash with last recorded hash in A_PACKAGE_VERSION_TRACKING.
* Returns detailed change detection report.
* @param pPackageOwner - Schema owner of the package
* @param pPackageName - Name of the package
* @example SELECT ENV_MANAGER.CHECK_PACKAGE_CHANGES('CT_MRDS', 'FILE_MANAGER') FROM DUAL;
* @ex_rslt WARNING: Package changed without version update!
* Last Version: 3.2.0
* Current Hash (SPEC): A7B3C5D9...
* Last Hash (SPEC): B8C4D6E0...
* RECOMMENDATION: Update PACKAGE_VERSION and PACKAGE_BUILD_DATE
**/
FUNCTION CHECK_PACKAGE_CHANGES(
pPackageOwner VARCHAR2,
pPackageName VARCHAR2
) RETURN VARCHAR2;
/**
* @name GET_PACKAGE_HASH_INFO
* @desc Returns formatted information about package hash and tracking history.
* Includes current hash, last tracked hash, and change detection status.
* @param pPackageOwner - Schema owner of the package
* @param pPackageName - Name of the package
* @example SELECT ENV_MANAGER.GET_PACKAGE_HASH_INFO('CT_MRDS', 'FILE_MANAGER') FROM DUAL;
* @ex_rslt Package: CT_MRDS.FILE_MANAGER
* Current Version: 3.2.0
* Current Hash (SPEC): A7B3C5D9...
* Last Tracked: 2025-10-22 16:30:00
* Status: OK - No changes detected
**/
FUNCTION GET_PACKAGE_HASH_INFO(
pPackageOwner VARCHAR2,
pPackageName VARCHAR2
) RETURN VARCHAR2;
END ENV_MANAGER;
/

create or replace PACKAGE CT_MRDS.FILE_ARCHIVER
AUTHID CURRENT_USER
AS
/**
* General comment for package: Please put comments for functions and procedures as shown in the example below.
* This is the standard.
* The comment structure is used by the GET_PACKAGE_DOCUMENTATION function,
* which returns documentation text for the Confluence page (to copy-paste it).
**/
-- Example comment:
/**
* @name EX_PROCEDURE_NAME
* @desc Procedure description
* @example select FILE_ARCHIVER.EX_PROCEDURE_NAME(pParameter => 129) from dual;
* @ex_rslt Example Result
**/
-- Package Version Information (Semantic Versioning: MAJOR.MINOR.PATCH)
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.3.0';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-02-11 12:00:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(100) := 'Grzegorz Michalski';
-- Version History (Latest changes first)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'3.3.0 (2026-02-11): Added IS_ARCHIVE_ENABLED and IS_KEEP_IN_TRASH columns to A_SOURCE_FILE_CONFIG for selective archiving and config-based TRASH policy. Removed pKeepInTrash parameter (now from config). Added ARCHIVE_ALL batch procedure with 3-level granularity (config/source/all). Added GATHER_TABLE_STAT_ALL batch statistics procedure with 3-level granularity. Added RESTORE_FILE_FROM_TRASH and PURGE_TRASH_FOLDER with 3-level granularity' || CHR(13)||CHR(10) ||
'3.2.1 (2026-02-10): Fixed status update - ARCHIVED → ARCHIVED_AND_TRASHED when moving files to TRASH folder (critical bug fix)' || CHR(13)||CHR(10) ||
'3.2.0 (2026-02-06): Added pKeepInTrash parameter (DEFAULT TRUE) to ARCHIVE_TABLE_DATA for TRASH folder retention control - files kept in TRASH subfolder (DATA bucket) by default for safety and compliance' || CHR(13)||CHR(10) ||
'3.1.2 (2026-02-06): Fixed missing PARTITION_YEAR/PARTITION_MONTH assignments in UPDATE statement and export query circular dependency (now filters by workflow_start instead of partition fields)' || CHR(13)||CHR(10) ||
'3.1.1 (2026-02-06): Fixed ORA-01422 error when DBMS_CLOUD.EXPORT_DATA creates multiple parquet files (parallel execution). Now stores archive directory prefix instead of individual filenames' || CHR(13)||CHR(10) ||
'3.1.0 (2026-01-29): Added function overloads for ARCHIVE_TABLE_DATA and GATHER_TABLE_STAT returning SQLCODE for Python library integration' || CHR(13)||CHR(10) ||
'3.0.0 (2026-01-27): MARS-828 - Added flexible archival strategies (MINIMUM_AGE_MONTHS with 0=current month, HYBRID) via ARCHIVAL_STRATEGY configuration' || CHR(13)||CHR(10) ||
'2.0.0 (2025-10-22): Added package versioning system using centralized ENV_MANAGER functions' || CHR(13)||CHR(10) ||
'1.5.0 (2025-10-18): Enhanced ARCHIVE_TABLE_DATA with Hive-style partitioning support' || CHR(13)||CHR(10) ||
'1.0.0 (2025-09-15): Initial release with table archival and statistics gathering';
cgBL CONSTANT VARCHAR2(2) := ENV_MANAGER.cgBL;
/**
* @name GET_TABLE_STAT
* @desc Private function to retrieve table statistics for archival processing.
* Returns A_TABLE_STAT record with table metadata and row counts.
* @param pSourceFileConfigKey - Configuration key for source file
* @return CT_MRDS.A_TABLE_STAT%ROWTYPE - Table statistics record
* @private Internal function for archival operations
**/
FUNCTION GET_TABLE_STAT(pSourceFileConfigKey IN NUMBER) RETURN CT_MRDS.A_TABLE_STAT%ROWTYPE;
/**
* @name ARCHIVE_TABLE_DATA
* @desc Wrapper procedure for DBMS_CLOUD.EXPORT_DATA.
* Exports data from the table specified by pSourceFileConfigKey (A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY) into a PARQUET file on OCI infrastructure.
* Each YEAR_MONTH pair goes to a separate file (implicit partitioning).
* TRASH policy is controlled by A_SOURCE_FILE_CONFIG.IS_KEEP_IN_TRASH column ('Y'=keep in TRASH, 'N'=delete immediately).
**/
PROCEDURE ARCHIVE_TABLE_DATA (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE
);
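-- Hedged usage sketch (illustrative only): archiving one configured table;
-- the configuration key 123 is an example value.
/*
BEGIN
  FILE_ARCHIVER.ARCHIVE_TABLE_DATA(pSourceFileConfigKey => 123);
END;
*/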
/**
* @name FN_ARCHIVE_TABLE_DATA
* @desc Function wrapper for ARCHIVE_TABLE_DATA procedure.
* Returns SQLCODE for Python library integration.
* Calls the main ARCHIVE_TABLE_DATA procedure and captures execution result.
* TRASH policy is controlled by A_SOURCE_FILE_CONFIG.IS_KEEP_IN_TRASH column ('Y'=keep in TRASH, 'N'=delete immediately).
* @example SELECT FILE_ARCHIVER.FN_ARCHIVE_TABLE_DATA(pSourceFileConfigKey => 123) FROM DUAL;
* @ex_rslt 0 (success) or error code
**/
FUNCTION FN_ARCHIVE_TABLE_DATA (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE
) RETURN PLS_INTEGER;
/**
* @name GATHER_TABLE_STAT
* @desc Gather info about EXTERNAL TABLE specified by pSourceFileConfigKey parameter (A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY).
* Data is inserted into A_TABLE_STAT and A_TABLE_STAT_HIST.
**/
PROCEDURE GATHER_TABLE_STAT (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE
);
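-- Hedged usage sketch (illustrative only): gathering statistics for one
-- configured external table; the configuration key 123 is an example value.
/*
BEGIN
  FILE_ARCHIVER.GATHER_TABLE_STAT(pSourceFileConfigKey => 123);
END;
*/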
/**
* @name FN_GATHER_TABLE_STAT
* @desc Function wrapper for GATHER_TABLE_STAT procedure.
* Returns SQLCODE for Python library integration.
* Calls the main GATHER_TABLE_STAT procedure and captures execution result.
* @example SELECT FILE_ARCHIVER.FN_GATHER_TABLE_STAT(pSourceFileConfigKey => 123) FROM DUAL;
* @ex_rslt 0 (success) or error code
**/
FUNCTION FN_GATHER_TABLE_STAT (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE
) RETURN PLS_INTEGER;
/**
* @name GATHER_TABLE_STAT_ALL
* @desc Multi-level batch statistics gathering procedure with three granularity levels.
* Processes configurations based on IS_ARCHIVE_ENABLED setting (when pOnlyEnabled=TRUE).
* Gathers statistics for external tables and inserts data into A_TABLE_STAT and A_TABLE_STAT_HIST.
* @param pSourceFileConfigKey - (LEVEL 1) Gather stats for specific configuration key (highest priority)
* @param pSourceKey - (LEVEL 2) Gather stats for all tables in source system (e.g., 'LM', 'C2D') (medium priority)
* @param pGatherAll - (LEVEL 3) When TRUE, gather stats for ALL tables across all sources (lowest priority)
* @param pOnlyEnabled - When TRUE (default), only process tables with IS_ARCHIVE_ENABLED='Y'
* @example -- Level 1: CALL FILE_ARCHIVER.GATHER_TABLE_STAT_ALL(pSourceFileConfigKey => 123);
* @example -- Level 2: CALL FILE_ARCHIVER.GATHER_TABLE_STAT_ALL(pSourceKey => 'LM');
* @example -- Level 3: CALL FILE_ARCHIVER.GATHER_TABLE_STAT_ALL(pGatherAll => TRUE);
* @example -- All tables regardless of IS_ARCHIVE_ENABLED: CALL FILE_ARCHIVER.GATHER_TABLE_STAT_ALL(pGatherAll => TRUE, pOnlyEnabled => FALSE);
**/
PROCEDURE GATHER_TABLE_STAT_ALL (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
pSourceKey IN CT_MRDS.A_SOURCE.A_SOURCE_KEY%TYPE DEFAULT NULL,
pGatherAll IN BOOLEAN DEFAULT FALSE,
pOnlyEnabled IN BOOLEAN DEFAULT TRUE
);
/**
* @name FN_GATHER_TABLE_STAT_ALL
* @desc Function wrapper for GATHER_TABLE_STAT_ALL procedure.
* Returns SQLCODE for Python library integration.
* Calls the main GATHER_TABLE_STAT_ALL procedure and captures execution result.
* @param pSourceFileConfigKey - (LEVEL 1) Gather stats for specific configuration key (highest priority)
* @param pSourceKey - (LEVEL 2) Gather stats for all tables in source system (medium priority)
* @param pGatherAll - (LEVEL 3) When TRUE, gather stats for ALL tables across all sources (lowest priority)
* @param pOnlyEnabled - When TRUE (default), only process tables with IS_ARCHIVE_ENABLED='Y'
* @example SELECT FILE_ARCHIVER.FN_GATHER_TABLE_STAT_ALL(pSourceKey => 'LM') FROM DUAL;
* @ex_rslt 0 (success) or error code
**/
FUNCTION FN_GATHER_TABLE_STAT_ALL (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
pSourceKey IN CT_MRDS.A_SOURCE.A_SOURCE_KEY%TYPE DEFAULT NULL,
pGatherAll IN BOOLEAN DEFAULT FALSE,
pOnlyEnabled IN BOOLEAN DEFAULT TRUE
) RETURN PLS_INTEGER;
/**
* @name ARCHIVE_ALL
* @desc Multi-level batch archival procedure with three granularity levels.
* Only processes configurations where IS_ARCHIVE_ENABLED='Y'.
* TRASH policy for each table is controlled by individual IS_KEEP_IN_TRASH column.
* @param pSourceFileConfigKey - (LEVEL 1) Archive specific configuration key (highest priority)
* @param pSourceKey - (LEVEL 2) Archive all enabled tables for source system (e.g., 'LM', 'C2D') (medium priority)
* @param pArchiveAll - (LEVEL 3) When TRUE, archive ALL enabled tables across all sources (lowest priority)
* @example -- Level 1: CALL FILE_ARCHIVER.ARCHIVE_ALL(pSourceFileConfigKey => 123);
* @example -- Level 2: CALL FILE_ARCHIVER.ARCHIVE_ALL(pSourceKey => 'LM');
* @example -- Level 3: CALL FILE_ARCHIVER.ARCHIVE_ALL(pArchiveAll => TRUE);
**/
PROCEDURE ARCHIVE_ALL (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
pSourceKey IN CT_MRDS.A_SOURCE.A_SOURCE_KEY%TYPE DEFAULT NULL,
pArchiveAll IN BOOLEAN DEFAULT FALSE
);
/**
* @name FN_ARCHIVE_ALL
* @desc Function wrapper for ARCHIVE_ALL procedure.
* Returns SQLCODE for Python library integration.
* Calls the main ARCHIVE_ALL procedure and captures execution result.
* @param pSourceFileConfigKey - (LEVEL 1) Archive specific configuration key (highest priority)
* @param pSourceKey - (LEVEL 2) Archive all enabled tables for source system (medium priority)
* @param pArchiveAll - (LEVEL 3) When TRUE, archive ALL enabled tables across all sources (lowest priority)
* @example SELECT FILE_ARCHIVER.FN_ARCHIVE_ALL(pSourceKey => 'LM') FROM DUAL;
* @ex_rslt 0 (success) or error code
**/
FUNCTION FN_ARCHIVE_ALL (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
pSourceKey IN CT_MRDS.A_SOURCE.A_SOURCE_KEY%TYPE DEFAULT NULL,
pArchiveAll IN BOOLEAN DEFAULT FALSE
) RETURN PLS_INTEGER;
/**
* @name RESTORE_FILE_FROM_TRASH
* @desc Restores files from TRASH folder back to ODS at three different granularity levels.
* Moves files from TRASH subfolder back to ODS subfolder in DATA bucket.
* Updates status from ARCHIVED_AND_TRASHED to INGESTED and clears archival metadata.
* @param pSourceFileReceivedKey - (LEVEL 1) Specific file to restore by A_SOURCE_FILE_RECEIVED_KEY (highest priority)
* @param pSourceFileConfigKey - (LEVEL 2) Restore all files for specific configuration key (medium priority)
* @param pRestoreAll - (LEVEL 3) When TRUE, restore ALL files with ARCHIVED_AND_TRASHED status (lowest priority)
* @example -- Restore single file: CALL FILE_ARCHIVER.RESTORE_FILE_FROM_TRASH(pSourceFileReceivedKey => 12345);
* @example -- Restore all files for config: CALL FILE_ARCHIVER.RESTORE_FILE_FROM_TRASH(pSourceFileConfigKey => 341);
* @example -- Restore all TRASH globally: CALL FILE_ARCHIVER.RESTORE_FILE_FROM_TRASH(pRestoreAll => TRUE);
**/
PROCEDURE RESTORE_FILE_FROM_TRASH (
pSourceFileReceivedKey IN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE DEFAULT NULL,
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
pRestoreAll IN BOOLEAN DEFAULT FALSE
);
/**
* @name RESTORE_FILE_FROM_TRASH
* @desc Function overload for RESTORE_FILE_FROM_TRASH procedure.
* Returns SQLCODE for Python library integration.
* Calls the main RESTORE_FILE_FROM_TRASH procedure and captures execution result.
* @param pSourceFileReceivedKey - (LEVEL 1) Specific file to restore by A_SOURCE_FILE_RECEIVED_KEY (highest priority)
* @param pSourceFileConfigKey - (LEVEL 2) Restore all files for specific configuration key (medium priority)
* @param pRestoreAll - (LEVEL 3) When TRUE, restore ALL files with ARCHIVED_AND_TRASHED status (lowest priority)
* @example SELECT FILE_ARCHIVER.RESTORE_FILE_FROM_TRASH(pSourceFileReceivedKey => 12345) FROM DUAL;
* @ex_rslt 0 (success) or error code
**/
FUNCTION RESTORE_FILE_FROM_TRASH (
pSourceFileReceivedKey IN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE DEFAULT NULL,
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
pRestoreAll IN BOOLEAN DEFAULT FALSE
) RETURN PLS_INTEGER;
/**
* @name PURGE_TRASH_FOLDER
* @desc Deletes files from TRASH folder at three different granularity levels.
* Updates status from ARCHIVED_AND_TRASHED to ARCHIVED_AND_PURGED for all affected files.
* WARNING: This operation is irreversible - files are permanently deleted from TRASH.
* @param pSourceFileReceivedKey - (LEVEL 1) Specific file to delete by A_SOURCE_FILE_RECEIVED_KEY (highest priority)
* @param pSourceFileConfigKey - (LEVEL 2) Delete all files for specific configuration key (medium priority)
* @param pPurgeAll - (LEVEL 3) When TRUE, delete ALL files with ARCHIVED_AND_TRASHED status (lowest priority)
* @example -- Delete single file: CALL FILE_ARCHIVER.PURGE_TRASH_FOLDER(pSourceFileReceivedKey => 12345);
* @example -- Delete all files for config: CALL FILE_ARCHIVER.PURGE_TRASH_FOLDER(pSourceFileConfigKey => 341);
* @example -- Delete all TRASH globally: CALL FILE_ARCHIVER.PURGE_TRASH_FOLDER(pPurgeAll => TRUE);
**/
PROCEDURE PURGE_TRASH_FOLDER (
pSourceFileReceivedKey IN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE DEFAULT NULL,
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
pPurgeAll IN BOOLEAN DEFAULT FALSE
);
/**
* @name PURGE_TRASH_FOLDER
* @desc Function overload for PURGE_TRASH_FOLDER procedure.
* Returns SQLCODE for Python library integration.
* Calls the main PURGE_TRASH_FOLDER procedure and captures execution result.
* WARNING: This operation is irreversible - files are permanently deleted from TRASH.
* @param pSourceFileReceivedKey - (LEVEL 1) Specific file to delete by A_SOURCE_FILE_RECEIVED_KEY (highest priority)
* @param pSourceFileConfigKey - (LEVEL 2) Delete all files for specific configuration key (medium priority)
* @param pPurgeAll - (LEVEL 3) When TRUE, delete ALL files with ARCHIVED_AND_TRASHED status (lowest priority)
* @example SELECT FILE_ARCHIVER.PURGE_TRASH_FOLDER(pSourceFileReceivedKey => 12345) FROM DUAL;
* @ex_rslt 0 (success) or error code
**/
FUNCTION PURGE_TRASH_FOLDER (
pSourceFileReceivedKey IN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE DEFAULT NULL,
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE DEFAULT NULL,
pPurgeAll IN BOOLEAN DEFAULT FALSE
) RETURN PLS_INTEGER;
---------------------------------------------------------------------------------------------------------------------------
-- PACKAGE VERSION MANAGEMENT FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* @name GET_VERSION
* @desc Returns the current version number of the FILE_ARCHIVER package.
* Uses semantic versioning format (MAJOR.MINOR.PATCH).
* @example SELECT FILE_ARCHIVER.GET_VERSION() FROM DUAL;
* @ex_rslt 3.3.0
**/
FUNCTION GET_VERSION RETURN VARCHAR2;
/**
* @name GET_BUILD_INFO
* @desc Returns comprehensive build information including version, build date, and author.
* Uses centralized ENV_MANAGER.GET_PACKAGE_VERSION_INFO function.
* @example SELECT FILE_ARCHIVER.GET_BUILD_INFO() FROM DUAL;
* @ex_rslt Package: FILE_ARCHIVER
* Version: 3.3.0
* Build Date: 2026-02-11 12:00:00
* Author: Grzegorz Michalski
**/
FUNCTION GET_BUILD_INFO RETURN VARCHAR2;
/**
* @name GET_VERSION_HISTORY
* @desc Returns complete version history with all releases and changes.
* Uses centralized ENV_MANAGER.FORMAT_VERSION_HISTORY function.
* @example SELECT FILE_ARCHIVER.GET_VERSION_HISTORY() FROM DUAL;
* @ex_rslt FILE_ARCHIVER Version History:
* 3.3.0 (2026-02-11): Added IS_ARCHIVE_ENABLED and IS_KEEP_IN_TRASH columns...
**/
FUNCTION GET_VERSION_HISTORY RETURN VARCHAR2;
END;
/

create or replace PACKAGE CT_MRDS.FILE_MANAGER
AUTHID CURRENT_USER
AS
/**
* General comment for package: Please put comments for functions and procedures as shown in the example below.
* This is the standard.
* The comment structure is used by the GET_PACKAGE_DOCUMENTATION function,
* which returns documentation text for the Confluence page (to copy-paste it).
**/
-- Example comment:
/**
* @name EX_PROCEDURE_NAME
* @desc Procedure description
* @example select FILE_MANAGER.EX_PROCEDURE_NAME(pParameter => 129) from dual;
* @ex_rslt Example Result
**/
-- Package Version Information (Semantic Versioning: MAJOR.MINOR.PATCH)
PACKAGE_VERSION CONSTANT VARCHAR2(10) := '3.5.1';
PACKAGE_BUILD_DATE CONSTANT VARCHAR2(20) := '2026-02-24 13:35:00';
PACKAGE_AUTHOR CONSTANT VARCHAR2(100) := 'Grzegorz Michalski';
-- Version History (Latest changes first)
VERSION_HISTORY CONSTANT VARCHAR2(4000) :=
'3.5.1 (2026-02-24): Fixed TIMESTAMP field syntax in GENERATE_EXTERNAL_TABLE_PARAMS for SQL*Loader compatibility (CHAR(35) DATE_FORMAT TIMESTAMP MASK format)' || CHR(13)||CHR(10) ||
'3.3.2 (2026-02-20): MARS-828 - Fixed threshold column names in GET_DET_SOURCE_FILE_CONFIG_INFO for MARS-828 compatibility' || CHR(13)||CHR(10) ||
'3.3.1 (2025-11-27): MARS-1046 - Fixed ISO 8601 datetime format parsing with milliseconds and timezone (e.g., 2012-03-02T14:16:23.798+01:00)' || CHR(13)||CHR(10) ||
'3.3.0 (2025-11-26): MARS-1056 - Fixed VARCHAR2 definitions in GENERATE_EXTERNAL_TABLE_PARAMS to preserve CHAR/BYTE semantics from template tables' || CHR(13)||CHR(10) ||
'3.2.1 (2025-11-24): MARS-1049 - Added pEncoding parameter support for CSV character set specification' || CHR(13)||CHR(10) ||
'3.2.0 (2025-10-22): Added package versioning system using centralized ENV_MANAGER functions' || CHR(13)||CHR(10) ||
'3.1.0 (2025-10-20): Enhanced PROCESS_SOURCE_FILE with 6-step validation workflow' || CHR(13)||CHR(10) ||
'3.0.0 (2025-10-15): Separated export procedures into dedicated DATA_EXPORTER package' || CHR(13)||CHR(10) ||
'2.5.0 (2025-10-10): Added DELETE_SOURCE_CASCADE for safe configuration removal' || CHR(13)||CHR(10) ||
'2.0.0 (2025-09-25): Added official path patterns support (INBOX 3-level, ODS 2-level, ARCHIVE 2-level)' || CHR(13)||CHR(10) ||
'1.0.0 (2025-09-01): Initial release with file processing and validation capabilities';
TYPE tSourceFileReceived IS RECORD
(
A_SOURCE_FILE_RECEIVED_KEY CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE,
A_SOURCE_FILE_CONFIG_KEY CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_CONFIG_KEY%TYPE,
SOURCE_FILE_PREFIX_INBOX VARCHAR2(430),
SOURCE_FILE_PREFIX_ODS VARCHAR2(430),
SOURCE_FILE_PREFIX_QUARANTINE VARCHAR2(430),
SOURCE_FILE_PREFIX_ARCHIVE VARCHAR2(430),
SOURCE_FILE_NAME CT_MRDS.A_SOURCE_FILE_RECEIVED.SOURCE_FILE_NAME%TYPE,
RECEPTION_DATE CT_MRDS.A_SOURCE_FILE_RECEIVED.RECEPTION_DATE%TYPE,
PROCESSING_STATUS CT_MRDS.A_SOURCE_FILE_RECEIVED.PROCESSING_STATUS%TYPE,
EXTERNAL_TABLE_NAME CT_MRDS.A_SOURCE_FILE_RECEIVED.EXTERNAL_TABLE_NAME%TYPE
);
cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);
vgSourceFileConfigKey PLS_INTEGER;
vgMsgTmp VARCHAR2(32000);
---------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------
/**
* @name GET_SOURCE_FILE_CONFIG
* @desc Get source file type by matching the source file name against source file type naming patterns
* or by specifying the id of a received source file.
* @example ...
* @ex_rslt "CT_MRDS.A_SOURCE_FILE_CONFIG%ROWTYPE"
**/
FUNCTION GET_SOURCE_FILE_CONFIG(pFileUri IN VARCHAR2 DEFAULT NULL
, pSourceFileReceivedKey IN NUMBER DEFAULT NULL
, pSourceFileConfigKey IN NUMBER DEFAULT NULL)
RETURN CT_MRDS.A_SOURCE_FILE_CONFIG%ROWTYPE;
/**
* @name REGISTER_SOURCE_FILE_RECEIVED
* @desc Register a newly received source file in A_SOURCE_FILE_RECEIVED table.
* This overload automatically determines source file type from the file name.
* It returns the value of A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY column for newly added record.
* @example vSourceFileReceivedKey := FILE_MANAGER.REGISTER_SOURCE_FILE_RECEIVED(pSourceFileReceivedName => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv');
* @ex_rslt 3245
**/
FUNCTION REGISTER_SOURCE_FILE_RECEIVED (
pSourceFileReceivedName IN VARCHAR2
)
RETURN PLS_INTEGER;
/**
* @name REGISTER_SOURCE_FILE_RECEIVED
* @desc Register a new source file in the A_SOURCE_FILE_RECEIVED table based on pSourceFileReceivedName and pSourceFileConfig.
* Then it returns the value of A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY column for newly added record.
* @example vSourceFileReceivedKey := FILE_MANAGER.REGISTER_SOURCE_FILE_RECEIVED(
* pSourceFileReceivedName => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv'
* ,pSourceFileConfig => ...A_SOURCE_FILE_CONFIG%ROWTYPE... );
* @ex_rslt 3245
**/
FUNCTION REGISTER_SOURCE_FILE_RECEIVED (
pSourceFileReceivedName IN VARCHAR2,
pSourceFileConfig IN CT_MRDS.A_SOURCE_FILE_CONFIG%ROWTYPE
)
RETURN PLS_INTEGER;
/**
* @name SET_SOURCE_FILE_RECEIVED_STATUS
* @desc Set status of file in A_SOURCE_FILE_RECEIVED table - PROCESSING_STATUS column
* based on A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY
* and provided value of pStatus parameter
* @example exec FILE_MANAGER.SET_SOURCE_FILE_RECEIVED_STATUS(pSourceFileReceivedKey => 377, pStatus => 'READY_FOR_INGESTION');
**/
PROCEDURE SET_SOURCE_FILE_RECEIVED_STATUS(
pSourceFileReceivedKey IN PLS_INTEGER,
pStatus IN VARCHAR2
);
/**
* @name GET_EXTERNAL_TABLE_COLUMNS
* @desc Function used to get a string with all table column definitions based on the pTargetTableTemplate "TEMPLATE TABLE" name.
* It is used for creating an "EXTERNAL TABLE" via the CREATE_EXTERNAL_TABLE procedure.
* @example select FILE_MANAGER.GET_EXTERNAL_TABLE_COLUMNS(pTargetTableTemplate => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES_HEADER') from dual;
* @ex_rslt "A_KEY" NUMBER(38,0) NOT NULL ENABLE,
* "A_WORKFLOW_HISTORY_KEY" NUMBER(38,0) NOT NULL ENABLE,
* "REV_NUMBER" NUMBER(28,0),
* "REF_DATE" DATE,
* "FREE_TEXT" VARCHAR2(1000 CHAR),
* "MLF_BS_TOTAL" NUMBER(28,10),
* "DF_BS_TOTAL" NUMBER(28,10),
* "MLF_SF_TOTAL" NUMBER(28,10),
* "DF_SF_TOTAL" NUMBER(28,10)
**/
FUNCTION GET_EXTERNAL_TABLE_COLUMNS (
pTargetTableTemplate IN VARCHAR2
)
RETURN CLOB;
/**
* @name CREATE_EXTERNAL_TABLE
* @desc A wrapper procedure for DBMS_CLOUD.CREATE_EXTERNAL_TABLE which creates External Table
* MARS-1049: Added pEncoding parameter for CSV character set specification
* @param pEncoding - Character set encoding for CSV files (e.g., 'UTF8', 'WE8MSWIN1252')
* If provided, adds CHARACTERSET clause to external table definition
* @example
* begin
* FILE_MANAGER.CREATE_EXTERNAL_TABLE(
* pTableName => 'STANDING_FACILITIES_HEADER',
* pTemplateTableName => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES_HEADER',
* pPrefix => 'ODS/LM/STANDING_FACILITIES_HEADER/',
* pBucketUri => 'https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frcnomajoc7v/b/mrds_data_tst/o/',
* pFileName => NULL,
* pDelimiter => ',',
* pEncoding => 'UTF8'
* );
* end;
**/
PROCEDURE CREATE_EXTERNAL_TABLE (
pTableName IN VARCHAR2,
pTemplateTableName IN VARCHAR2,
pPrefix IN VARCHAR2,
pBucketUri IN VARCHAR2 DEFAULT ENV_MANAGER.gvInboxBucketUri,
pFileName IN VARCHAR2 DEFAULT NULL,
pDelimiter IN VARCHAR2 DEFAULT ',',
pEncoding IN VARCHAR2 DEFAULT NULL -- MARS-1049: new parameter
);
/**
* @name CREATE_EXTERNAL_TABLE
* @desc Creates External Table for single file provided by
* pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* @example exec FILE_MANAGER.CREATE_EXTERNAL_TABLE(pSourceFileReceivedKey => 377);
**/
PROCEDURE CREATE_EXTERNAL_TABLE (
pSourceFileReceivedKey IN NUMBER
);
/**
* @name VALIDATE_SOURCE_FILE_RECEIVED
* @desc A wrapper procedure for DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE.
* It validates an External Table built upon a single file
* provided by the pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* @example exec FILE_MANAGER.VALIDATE_SOURCE_FILE_RECEIVED(pSourceFileReceivedKey => 377);
**/
PROCEDURE VALIDATE_SOURCE_FILE_RECEIVED
(
pSourceFileReceivedKey IN NUMBER
);
/**
* @name VALIDATE_EXTERNAL_TABLE
* @desc A wrapper function for DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE.
* It validates External Table provided by parameter pTableName.
* It returns: PASSED or FAILED.
* @example
* declare
* vStatus VARCHAR2(100);
* begin
* vStatus := FILE_MANAGER.VALIDATE_EXTERNAL_TABLE(pTableName => 'STANDING_FACILITIES_HEADER');
* DBMS_OUTPUT.PUT_LINE('vStatus = '||vStatus);
* end;
*
* @ex_rslt FAILED
**/
FUNCTION VALIDATE_EXTERNAL_TABLE(pTableName IN VARCHAR2)
RETURN VARCHAR2;
/**
* @name S_VALIDATE_EXTERNAL_TABLE
* @desc A function which checks if a SELECT query returns any rows.
* It tries to select from the External Table provided by parameter pTableName.
* It returns: PASSED or FAILED.
* @example
* declare
* vStatus VARCHAR2(100);
* begin
* vStatus := FILE_MANAGER.S_VALIDATE_EXTERNAL_TABLE(pTableName => 'STANDING_FACILITIES_HEADER');
* DBMS_OUTPUT.PUT_LINE('vStatus = '||vStatus);
* end;
*
* @ex_rslt PASSED
**/
FUNCTION S_VALIDATE_EXTERNAL_TABLE(pTableName IN VARCHAR2)
RETURN VARCHAR2;
/**
* @name DROP_EXTERNAL_TABLE
* @desc It drops the External Table for a single file provided by
* the pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* @example exec FILE_MANAGER.DROP_EXTERNAL_TABLE(pSourceFileReceivedKey => 377);
**/
PROCEDURE DROP_EXTERNAL_TABLE (
pSourceFileReceivedKey IN NUMBER
);
/**
* @name COPY_FILE
* @desc It copies the file provided by
* the pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* into the destination provided by the pDestination parameter.
* Allowed values for pDestination are: 'ODS'
* @example exec FILE_MANAGER.COPY_FILE(pSourceFileReceivedKey => 377, pDestination => 'ODS');
**/
PROCEDURE COPY_FILE(
pSourceFileReceivedKey IN NUMBER,
pDestination IN VARCHAR2
);
/**
* @name MOVE_FILE
* @desc It moves the file provided by
* the pSourceFileReceivedKey parameter (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY)
* into the destination provided by the pDestination parameter.
* Allowed values for pDestination are: 'ODS', 'QUARANTINE'
* @example exec FILE_MANAGER.MOVE_FILE(pSourceFileReceivedKey => 377, pDestination => 'ODS');
**/
PROCEDURE MOVE_FILE(
pSourceFileReceivedKey IN NUMBER,
pDestination IN VARCHAR2
);
/**
* @name DELETE_FOLDER_CONTENTS
* @desc It deletes all files from specified folder in the cloud storage.
* The procedure lists all objects in the specified folder prefix and deletes them one by one.
* pBucketArea parameter specifies which bucket to use: 'INBOX', 'DATA', 'ARCHIVE'
* pFolderPrefix parameter specifies the folder path within the bucket (e.g., 'C2D/UC_DISSEM/UC_NMA_DISSEM/')
* @example exec FILE_MANAGER.DELETE_FOLDER_CONTENTS(pBucketArea => 'INBOX', pFolderPrefix => 'C2D/UC_DISSEM/UC_NMA_DISSEM/');
**/
PROCEDURE DELETE_FOLDER_CONTENTS(
pBucketArea IN VARCHAR2,
pFolderPrefix IN VARCHAR2
);
/**
* @name PROCESS_SOURCE_FILE
* @desc It processes the file provided by the pSourceFileReceivedName parameter.
* Umbrella procedure that calls:
* - REGISTER_SOURCE_FILE_RECEIVED;
* - CREATE_EXTERNAL_TABLE;
* - VALIDATE_SOURCE_FILE_RECEIVED;
* - DROP_EXTERNAL_TABLE;
* - MOVE_FILE;
* @example exec FILE_MANAGER.PROCESS_SOURCE_FILE(pSourceFileReceivedName => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv');
**/
PROCEDURE PROCESS_SOURCE_FILE(pSourceFileReceivedName IN VARCHAR2)
;
/**
* @name PROCESS_SOURCE_FILE
* @desc It processes the file provided by the pSourceFileReceivedName parameter and returns the processing result.
* It returns 0 on success or a negative error code on failure.
* Umbrella function that calls the PROCESS_SOURCE_FILE procedure.
* @example
* declare
* vResult PLS_INTEGER;
* begin
* vResult := CT_MRDS.FILE_MANAGER.PROCESS_SOURCE_FILE(PSOURCEFILERECEIVEDNAME => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv');
* DBMS_OUTPUT.PUT_LINE('vResult = ' || vResult);
* end;
* @ex_rslt 0
* -20021
**/
FUNCTION PROCESS_SOURCE_FILE(pSourceFileReceivedName IN VARCHAR2)
RETURN PLS_INTEGER;
/**
* @name GET_DATE_FORMAT
* @desc Returns the date format for the specified template table name and column name.
* The format is taken from the A_COLUMN_DATE_FORMAT configuration table.
* @example select FILE_MANAGER.GET_DATE_FORMAT(
* pTemplateTableName => 'STANDING_FACILITIES_HEADER',
* pColumnName => 'SNAPSHOT_DATE')
* from dual;
* @ex_rslt DD/MM/YYYY HH24:MI:SS
**/
FUNCTION GET_DATE_FORMAT(
pTemplateTableName IN VARCHAR2,
pColumnName IN VARCHAR2
) RETURN VARCHAR2;
/**
* @name GENERATE_EXTERNAL_TABLE_PARAMS
* @desc It builds two strings: pColumnList and pFieldList for specified Template Table name, by parameter: pTemplateTableName.
* @example
* declare
* vColumnList CLOB;
* vFieldList CLOB;
* begin
* FILE_MANAGER.GENERATE_EXTERNAL_TABLE_PARAMS (
* pTemplateTableName => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES_HEADER'
* ,pColumnList => vColumnList
* ,pFieldList => vFieldList
* );
* DBMS_OUTPUT.PUT_LINE('vColumnList = '||vColumnList);
* DBMS_OUTPUT.PUT_LINE('vFieldList = '||vFieldList);
* end;
* /
**/
PROCEDURE GENERATE_EXTERNAL_TABLE_PARAMS (
pTemplateTableName IN VARCHAR2,
pColumnList OUT CLOB,
pFieldList OUT CLOB
);
/**
* @name ADD_SOURCE
* @desc Insert a new record to A_SOURCE table.
* pSourceKey is a PRIMARY KEY value.
**/
PROCEDURE ADD_SOURCE (
pSourceKey IN CT_MRDS.A_SOURCE.A_SOURCE_KEY%TYPE,
pSourceName IN CT_MRDS.A_SOURCE.SOURCE_NAME%TYPE
);
/**
* @name DELETE_SOURCE_CASCADE
* @desc Safely deletes a SOURCE specified by pSourceKey parameter from A_SOURCE table and all dependent tables:
* - A_SOURCE_FILE_CONFIG
* - A_SOURCE_FILE_RECEIVED
* - A_COLUMN_DATE_FORMAT (only if template table is not shared with other source systems)
* The procedure checks if template tables are shared before deleting date format configurations.
* If a template table is used by multiple source systems, date formats are preserved.
* @example CALL CT_MRDS.FILE_MANAGER.DELETE_SOURCE_CASCADE(pSourceKey => 'TEST_SYS');
**/
PROCEDURE DELETE_SOURCE_CASCADE (
pSourceKey IN CT_MRDS.A_SOURCE.A_SOURCE_KEY%TYPE
);
/**
* @name GET_CONTAINER_SOURCE_FILE_CONFIG_KEY
* @desc For specified parameter pSourceFileId (A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID)
* it returns A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY for related CONTAINER record.
* @example select FILE_MANAGER.GET_CONTAINER_SOURCE_FILE_CONFIG_KEY(
* pSourceFileId => 'UC_DISSEM')
* from dual;
* @ex_rslt 126
**/
FUNCTION GET_CONTAINER_SOURCE_FILE_CONFIG_KEY (
pSourceFileId IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID%TYPE
) RETURN PLS_INTEGER;
/**
* @name GET_SOURCE_FILE_CONFIG_KEY
* @desc For specified input parameters,
* it returns A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY.
* @example select FILE_MANAGER.GET_SOURCE_FILE_CONFIG_KEY (
* pSourceFileType => 'INPUT'
* ,pSourceFileId => 'UC_DISSEM'
* ,pTableId => 'UC_NMA_DISSEM')
* from dual;
* @ex_rslt 126
**/
FUNCTION GET_SOURCE_FILE_CONFIG_KEY (
pSourceFileType IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE%TYPE DEFAULT 'INPUT'
,pSourceFileId IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID%TYPE
,pTableId IN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID%TYPE
) RETURN PLS_INTEGER;
/**
* @name ADD_SOURCE_FILE_CONFIG
* @desc Insert a new record to A_SOURCE_FILE_CONFIG table.
* MARS-1049: Added pEncoding parameter for CSV character set specification.
* @param pEncoding - Character set encoding for CSV files (e.g., 'UTF8', 'WE8MSWIN1252', 'EE8ISO8859P2')
* If NULL, no CHARACTERSET clause is added to external table definitions
* @example CALL CT_MRDS.FILE_MANAGER.ADD_SOURCE_FILE_CONFIG(
* pSourceKey => 'C2D', pSourceFileType => 'INPUT',
* pSourceFileId => 'UC_DISSEM', pTableId => 'METADATA_LOADS',
* pTemplateTableName => 'CT_ET_TEMPLATES.C2D_A_UC_DISSEM_METADATA_LOADS',
* pEncoding => 'UTF8'
* );
**/
PROCEDURE ADD_SOURCE_FILE_CONFIG (
pSourceKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_KEY%TYPE
,pSourceFileType IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_TYPE%TYPE
,pSourceFileId IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_ID%TYPE
,pSourceFileDesc IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_DESC%TYPE
,pSourceFileNamePattern IN CT_MRDS.A_SOURCE_FILE_CONFIG.SOURCE_FILE_NAME_PATTERN%TYPE
,pTableId IN CT_MRDS.A_SOURCE_FILE_CONFIG.TABLE_ID%TYPE DEFAULT NULL
,pTemplateTableName IN CT_MRDS.A_SOURCE_FILE_CONFIG.TEMPLATE_TABLE_NAME%TYPE DEFAULT NULL
,pContainerFileKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.CONTAINER_FILE_KEY%TYPE DEFAULT NULL
,pEncoding IN CT_MRDS.A_SOURCE_FILE_CONFIG.ENCODING%TYPE DEFAULT NULL -- MARS-1049: new parameter
);
/**
* @name ADD_COLUMN_DATE_FORMAT
* @desc Insert a new record to A_COLUMN_DATE_FORMAT table.
**/
PROCEDURE ADD_COLUMN_DATE_FORMAT (
pTemplateTableName IN CT_MRDS.A_COLUMN_DATE_FORMAT.TEMPLATE_TABLE_NAME%TYPE
,pColumnName IN CT_MRDS.A_COLUMN_DATE_FORMAT.COLUMN_NAME%TYPE
,pDateFormat IN CT_MRDS.A_COLUMN_DATE_FORMAT.DATE_FORMAT%TYPE
);
/**
* @name GET_BUCKET_URI
* @desc Function used to get string with bucket http url.
* Possible input values for pBucketArea are: 'INBOX', 'ODS', 'DATA', 'ARCHIVE'
* @example select FILE_MANAGER.GET_BUCKET_URI(pBucketArea => 'ODS') from dual;
* @ex_rslt https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frcnomajoc7v/b/mrds_data_tst/o/
**/
FUNCTION GET_BUCKET_URI(pBucketArea VARCHAR2)
RETURN VARCHAR2;
/**
* @name GET_DET_SOURCE_FILE_CONFIG_INFO
* @desc Function returns details about A_SOURCE_FILE_CONFIG record
* for specified pSourceFileConfigKey (A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY).
* If pIncludeContainerInfo is <> 0 it returns additional info about related Container config record (A_SOURCE_FILE_CONFIG)
* If pIncludeColumnFormatInfo is <> 0 it returns additional info about related ColumnFormat config record (A_COLUMN_DATE_FORMAT)
* @example select FILE_MANAGER.GET_DET_SOURCE_FILE_CONFIG_INFO (
* pSourceFileConfigKey => 128
* ,pIncludeContainerInfo => 1
* ,pIncludeColumnFormatInfo => 1
* ) from dual;
* @ex_rslt
* Details about File Configuration:
* --------------------------------
* A_SOURCE_FILE_CONFIG_KEY = 128
* A_SOURCE_KEY = C2D
* ...
* --------------------------------
*
* Details about related Container Config:
* --------------------------------
* A_SOURCE_FILE_CONFIG_KEY = 126
* A_SOURCE_KEY = C2D
* ...
* --------------------------------
*
* Column Date Format config entries:
* --------------------------------
* TEMPLATE_TABLE_NAME = CT_ET_TEMPLATES.C2D_UC_MA_DISSEM
* ...
* --------------------------------
**/
FUNCTION GET_DET_SOURCE_FILE_CONFIG_INFO (
pSourceFileConfigKey IN CT_MRDS.A_SOURCE_FILE_CONFIG.A_SOURCE_FILE_CONFIG_KEY%TYPE
,pIncludeContainerInfo IN PLS_INTEGER DEFAULT 1
,pIncludeColumnFormatInfo IN PLS_INTEGER DEFAULT 1
) RETURN VARCHAR2;
/**
* @name GET_DET_SOURCE_FILE_RECEIVED_INFO
* @desc Function returns details about A_SOURCE_FILE_RECEIVED record
* for specified pSourceFileReceivedKey (A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY).
* If pIncludeConfigInfo is <> 0 it returns additional info about the related config record (A_SOURCE_FILE_CONFIG)
* If pIncludeContainerInfo is <> 0 it returns additional info about related Container config record (A_SOURCE_FILE_CONFIG)
* If pIncludeColumnFormatInfo is <> 0 it returns additional info about related ColumnFormat config record (A_COLUMN_DATE_FORMAT)
* @example select FILE_MANAGER.GET_DET_SOURCE_FILE_RECEIVED_INFO (
* pSourceFileReceivedKey => 377
* ,pIncludeConfigInfo => 1
* ,pIncludeContainerInfo => 1
* ,pIncludeColumnFormatInfo => 1
* ) from dual;
*
**/
FUNCTION GET_DET_SOURCE_FILE_RECEIVED_INFO (
pSourceFileReceivedKey IN CT_MRDS.A_SOURCE_FILE_RECEIVED.A_SOURCE_FILE_RECEIVED_KEY%TYPE
,pIncludeConfigInfo IN PLS_INTEGER DEFAULT 1
,pIncludeContainerInfo IN PLS_INTEGER DEFAULT 1
,pIncludeColumnFormatInfo IN PLS_INTEGER DEFAULT 1
) RETURN VARCHAR2;
/**
* @name GET_DET_USER_LOAD_OPERATIONS
* @desc Function returns details from USER_LOAD_OPERATIONS table
* for specified pOperationId.
* @example select FILE_MANAGER.GET_DET_USER_LOAD_OPERATIONS (pOperationId => 3608) from dual;
* @ex_rslt
* Details about USER_LOAD_OPERATIONS where ID = 3608
* --------------------------------
* ID = 3608
* TYPE = VALIDATE
* SID = 31260
* SERIAL# = 52915
* START_TIME = 2025-05-20 10.08.24.436983 EUROPE/BELGRADE
* UPDATE_TIME = 2025-05-20 10.08.24.458643 EUROPE/BELGRADE
* STATUS = FAILED
* OWNER_NAME = CT_MRDS
* TABLE_NAME = STANDING_FACILITIES_HEADER
* PARTITION_NAME =
* SUBPARTITION_NAME =
* FILE_URI_LIST =
* ROWS_LOADED =
* LOGFILE_TABLE = VALIDATE$3608_LOG
* BADFILE_TABLE = VALIDATE$3608_BAD
* STATUS_TABLE =
* TEMPEXT_TABLE =
* CREDENTIAL_NAME =
* EXPIRATION_TIME = 2025-05-22 10.08.24.436983000 EUROPE/BELGRADE
* --------------------------------
**/
FUNCTION GET_DET_USER_LOAD_OPERATIONS (
pOperationId PLS_INTEGER
) RETURN VARCHAR2;
/**
* @name ANALYZE_VALIDATION_ERRORS
* @desc Wrapper function that analyzes validation errors for a source file using its received key.
* Automatically derives template schema, table name, CSV URI and validation log table
* from file metadata and calls ENV_MANAGER.ANALYZE_VALIDATION_ERRORS.
* @example SELECT FILE_MANAGER.ANALYZE_VALIDATION_ERRORS(63) FROM DUAL;
* @ex_rslt Detailed validation analysis report with column mismatches and solutions
**/
FUNCTION ANALYZE_VALIDATION_ERRORS(
pSourceFileReceivedKey IN NUMBER
) RETURN VARCHAR2;
---------------------------------------------------------------------------------------------------------------------------
-- PACKAGE VERSION MANAGEMENT FUNCTIONS
---------------------------------------------------------------------------------------------------------------------------
/**
* @name GET_VERSION
* @desc Returns the current version number of the FILE_MANAGER package.
* Uses semantic versioning format (MAJOR.MINOR.PATCH).
* @example SELECT FILE_MANAGER.GET_VERSION() FROM DUAL;
* @ex_rslt 3.2.0
**/
FUNCTION GET_VERSION RETURN VARCHAR2;
/**
* @name GET_BUILD_INFO
* @desc Returns comprehensive build information including version, build date, and author.
* Uses centralized ENV_MANAGER.GET_PACKAGE_VERSION_INFO function.
* @example SELECT FILE_MANAGER.GET_BUILD_INFO() FROM DUAL;
* @ex_rslt Package: FILE_MANAGER
* Version: 3.2.0
* Build Date: 2025-10-22 16:30:00
* Author: Grzegorz Michalski
**/
FUNCTION GET_BUILD_INFO RETURN VARCHAR2;
/**
* @name GET_VERSION_HISTORY
* @desc Returns complete version history with all releases and changes.
* Uses centralized ENV_MANAGER.FORMAT_VERSION_HISTORY function.
* @example SELECT FILE_MANAGER.GET_VERSION_HISTORY() FROM DUAL;
* @ex_rslt FILE_MANAGER Version History:
* 3.2.0 (2025-10-22): Added package versioning system...
**/
FUNCTION GET_VERSION_HISTORY RETURN VARCHAR2;
END;
/
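The spec above documents an umbrella workflow (PROCESS_SOURCE_FILE) plus diagnostic helpers. A minimal end-to-end driver sketch, using only the functions documented above (the file path is the same illustrative one used in the @example blocks):

```sql
declare
  vResult PLS_INTEGER;
begin
  -- Register, create/validate the External Table, and move the file in one call.
  vResult := CT_MRDS.FILE_MANAGER.PROCESS_SOURCE_FILE(
               pSourceFileReceivedName => 'INBOX/C2D/UC_DISSEM/UC_NMA_DISSEM/UC_NMA_DISSEM-277740.csv');
  if vResult = 0 then
    DBMS_OUTPUT.PUT_LINE('File processed successfully');
  else
    -- Negative result = error code; the package version helps when reporting it.
    DBMS_OUTPUT.PUT_LINE('Processing failed, code = ' || vResult
      || ' (FILE_MANAGER v' || CT_MRDS.FILE_MANAGER.GET_VERSION() || ')');
  end if;
end;
/
```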


@@ -0,0 +1,115 @@
-- ============================================================================
-- MARS-1409 Package Version Tracking
-- ============================================================================
-- Purpose: Record package versions in A_PACKAGE_VERSION_TRACKING table
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
PROMPT
PROMPT ============================================================================
PROMPT Recording Package Versions
PROMPT ============================================================================
DECLARE
v_file_manager_version VARCHAR2(50);
v_file_manager_build VARCHAR2(100);
v_env_manager_version VARCHAR2(50);
v_env_manager_build VARCHAR2(100);
v_file_archiver_version VARCHAR2(50);
v_file_archiver_build VARCHAR2(100);
v_data_exporter_version VARCHAR2(50);
v_data_exporter_build VARCHAR2(500);
BEGIN
-- Get FILE_MANAGER version
BEGIN
v_file_manager_version := CT_MRDS.FILE_MANAGER.GET_VERSION();
v_file_manager_build := CT_MRDS.FILE_MANAGER.GET_BUILD_INFO();
DBMS_OUTPUT.PUT_LINE('FILE_MANAGER Version: ' || v_file_manager_version);
DBMS_OUTPUT.PUT_LINE('FILE_MANAGER Build: ' || v_file_manager_build);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('WARNING: Could not retrieve FILE_MANAGER version');
END;
-- Get ENV_MANAGER version
BEGIN
v_env_manager_version := CT_MRDS.ENV_MANAGER.GET_VERSION();
v_env_manager_build := CT_MRDS.ENV_MANAGER.GET_BUILD_INFO();
DBMS_OUTPUT.PUT_LINE('ENV_MANAGER Version: ' || v_env_manager_version);
DBMS_OUTPUT.PUT_LINE('ENV_MANAGER Build: ' || v_env_manager_build);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('WARNING: Could not retrieve ENV_MANAGER version');
END;
-- Get FILE_ARCHIVER version
BEGIN
v_file_archiver_version := CT_MRDS.FILE_ARCHIVER.GET_VERSION();
v_file_archiver_build := CT_MRDS.FILE_ARCHIVER.GET_BUILD_INFO();
DBMS_OUTPUT.PUT_LINE('FILE_ARCHIVER Version: ' || v_file_archiver_version);
DBMS_OUTPUT.PUT_LINE('FILE_ARCHIVER Build: ' || v_file_archiver_build);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('WARNING: Could not retrieve FILE_ARCHIVER version');
END;
-- Get DATA_EXPORTER version
BEGIN
v_data_exporter_version := CT_MRDS.DATA_EXPORTER.GET_VERSION();
v_data_exporter_build := CT_MRDS.DATA_EXPORTER.GET_BUILD_INFO();
DBMS_OUTPUT.PUT_LINE('DATA_EXPORTER Version: ' || v_data_exporter_version);
DBMS_OUTPUT.PUT_LINE('DATA_EXPORTER Build: ' || v_data_exporter_build);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('WARNING: Could not retrieve DATA_EXPORTER version');
END;
-- Insert version records into A_PACKAGE_VERSION_TRACKING
BEGIN
EXECUTE IMMEDIATE 'INSERT INTO CT_MRDS.A_PACKAGE_VERSION_TRACKING
(PACKAGE_OWNER, PACKAGE_NAME, PACKAGE_TYPE, PACKAGE_VERSION,
PACKAGE_BUILD_DATE, PACKAGE_AUTHOR, TRACKING_DATE, TRACKED_BY_USER, TRACKED_BY_MODULE)
VALUES (:1, :2, :3, :4, :5, :6, SYSTIMESTAMP, USER, :7)'
USING 'CT_MRDS', 'FILE_MANAGER', 'BOTH', v_file_manager_version,
'', '', 'MARS-1409';
EXECUTE IMMEDIATE 'INSERT INTO CT_MRDS.A_PACKAGE_VERSION_TRACKING
(PACKAGE_OWNER, PACKAGE_NAME, PACKAGE_TYPE, PACKAGE_VERSION,
PACKAGE_BUILD_DATE, PACKAGE_AUTHOR, TRACKING_DATE, TRACKED_BY_USER, TRACKED_BY_MODULE)
VALUES (:1, :2, :3, :4, :5, :6, SYSTIMESTAMP, USER, :7)'
USING 'CT_MRDS', 'ENV_MANAGER', 'BOTH', v_env_manager_version,
'', '', 'MARS-1409';
EXECUTE IMMEDIATE 'INSERT INTO CT_MRDS.A_PACKAGE_VERSION_TRACKING
(PACKAGE_OWNER, PACKAGE_NAME, PACKAGE_TYPE, PACKAGE_VERSION,
PACKAGE_BUILD_DATE, PACKAGE_AUTHOR, TRACKING_DATE, TRACKED_BY_USER, TRACKED_BY_MODULE)
VALUES (:1, :2, :3, :4, :5, :6, SYSTIMESTAMP, USER, :7)'
USING 'CT_MRDS', 'FILE_ARCHIVER', 'BOTH', v_file_archiver_version,
'', '', 'MARS-1409';
EXECUTE IMMEDIATE 'INSERT INTO CT_MRDS.A_PACKAGE_VERSION_TRACKING
(PACKAGE_OWNER, PACKAGE_NAME, PACKAGE_TYPE, PACKAGE_VERSION,
PACKAGE_BUILD_DATE, PACKAGE_AUTHOR, TRACKING_DATE, TRACKED_BY_USER, TRACKED_BY_MODULE)
VALUES (:1, :2, :3, :4, :5, :6, SYSTIMESTAMP, USER, :7)'
USING 'CT_MRDS', 'DATA_EXPORTER', 'BOTH', v_data_exporter_version,
'', '', 'MARS-1409';
COMMIT;
DBMS_OUTPUT.PUT_LINE('Package version tracking recorded successfully');
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('ERROR: Could not record version tracking - ' || SQLERRM);
RAISE;
END;
END;
/
PROMPT
PROMPT ============================================================================
PROMPT Version Tracking Complete
PROMPT ============================================================================
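The rows recorded by the block above can be inspected afterwards; a hedged query sketch using only the column names that appear in the INSERT statements (the rest of the table layout is assumed):

```sql
-- Versions tracked by this script, newest first per package
SELECT PACKAGE_NAME, PACKAGE_VERSION, TRACKING_DATE, TRACKED_BY_USER
  FROM CT_MRDS.A_PACKAGE_VERSION_TRACKING
 WHERE TRACKED_BY_MODULE = 'MARS-1409'
 ORDER BY PACKAGE_NAME, TRACKING_DATE DESC;
```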


@@ -0,0 +1,63 @@
-- ============================================================================
-- MARS-1409 Package Version Verification
-- ============================================================================
-- Purpose: Verify package versions after installation
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
SET VERIFY OFF
SET FEEDBACK OFF
SET ECHO OFF
PROMPT
PROMPT ============================================================================
PROMPT Package Version Verification
PROMPT ============================================================================
-- FILE_MANAGER version
PROMPT
PROMPT CT_MRDS.FILE_MANAGER Package:
SELECT CT_MRDS.FILE_MANAGER.GET_VERSION() AS VERSION FROM DUAL;
SELECT CT_MRDS.FILE_MANAGER.GET_BUILD_INFO() AS BUILD_INFO FROM DUAL;
-- ENV_MANAGER version
PROMPT
PROMPT CT_MRDS.ENV_MANAGER Package:
SELECT CT_MRDS.ENV_MANAGER.GET_VERSION() AS VERSION FROM DUAL;
SELECT CT_MRDS.ENV_MANAGER.GET_BUILD_INFO() AS BUILD_INFO FROM DUAL;
-- FILE_ARCHIVER version
PROMPT
PROMPT CT_MRDS.FILE_ARCHIVER Package:
SELECT CT_MRDS.FILE_ARCHIVER.GET_VERSION() AS VERSION FROM DUAL;
SELECT CT_MRDS.FILE_ARCHIVER.GET_BUILD_INFO() AS BUILD_INFO FROM DUAL;
-- DATA_EXPORTER version
PROMPT
PROMPT CT_MRDS.DATA_EXPORTER Package:
SELECT CT_MRDS.DATA_EXPORTER.GET_VERSION() AS VERSION FROM DUAL;
SELECT CT_MRDS.DATA_EXPORTER.GET_BUILD_INFO() AS BUILD_INFO FROM DUAL;
-- Package compilation status
PROMPT
PROMPT Package Compilation Status:
SELECT object_name, object_type, status, last_ddl_time
FROM all_objects
WHERE owner = 'CT_MRDS'
AND object_name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER', 'DATA_EXPORTER')
AND object_type IN ('PACKAGE', 'PACKAGE BODY')
ORDER BY object_name, object_type;
-- Check for compilation errors
PROMPT
PROMPT Compilation Errors (if any):
SELECT name, type, line, position, text
FROM all_errors
WHERE owner = 'CT_MRDS'
AND name IN ('FILE_MANAGER', 'ENV_MANAGER', 'FILE_ARCHIVER', 'DATA_EXPORTER')
ORDER BY name, type, line, position;
PROMPT
PROMPT ============================================================================
PROMPT Verification Complete
PROMPT ============================================================================
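If the compilation-status query above reports INVALID objects, they can be recompiled before re-running the verification; a standard sketch (repeat per invalid package, then re-query ALL_ERRORS):

```sql
-- Recompile specification and body separately
ALTER PACKAGE CT_MRDS.FILE_MANAGER COMPILE;
ALTER PACKAGE CT_MRDS.FILE_MANAGER COMPILE BODY;
```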


@@ -0,0 +1,5 @@
# Exclude temporary folders from version control
confluence/
log/
test/
mock_data/


@@ -0,0 +1,55 @@
-- ============================================================================
-- MARS-1005-PREHOOK Installation Script 00: DATA_EXPORTER Package
-- ============================================================================
-- Purpose: Deploy updated DATA_EXPORTER package (SPEC + BODY) v2.17.0
-- PARQUET FIX: Added pFormat parameter to buildQueryWithDateFormats.
-- REPLACE(col,CHR(34)) now applied only when pFormat=CSV.
-- EXPORT_TABLE_DATA_BY_DATE now passes PARQUET; previously string data was
-- corrupted (a single " doubled to "") in Parquet binary files.
-- v2.16.0 RFC 4180 FIX remains intact for CSV path.
-- Schema: CT_MRDS
-- Object: PACKAGE DATA_EXPORTER
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
PROMPT
PROMPT ============================================================================
PROMPT MARS-1005-PREHOOK: Installing CT_MRDS.DATA_EXPORTER Package
PROMPT ============================================================================
PROMPT Package: CT_MRDS.DATA_EXPORTER
PROMPT Version: 2.16.0 -> 2.17.0
PROMPT Change: PARQUET FIX - pFormat param added to buildQueryWithDateFormats.
PROMPT REPLACE(col,CHR(34)) applied only when pFormat=CSV.
PROMPT Parquet path no longer corrupts strings containing double quotes.
PROMPT ============================================================================
PROMPT
PROMPT Step 1: Deploy Package Specification
PROMPT ============================================================================
@@new_version\DATA_EXPORTER.pkg
PROMPT
PROMPT Package specification deployment completed.
PROMPT
PROMPT
PROMPT Step 2: Deploy Package Body
PROMPT ============================================================================
@@new_version\DATA_EXPORTER.pkb
PROMPT
PROMPT Package body deployment completed.
PROMPT
PROMPT
PROMPT ============================================================================
PROMPT DATA_EXPORTER Package installation completed (v2.17.0)
PROMPT ============================================================================
PROMPT
--=============================================================================================================================
-- End of Script
--=============================================================================================================================
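The v2.17.0 change described in the header reduces to a single format branch; a hedged sketch of the logic (pFormat, buildQueryWithDateFormats and the REPLACE call are taken from the header comment, the surrounding variable names are assumptions):

```sql
-- Inside buildQueryWithDateFormats (sketch): quote stripping is CSV-only.
IF pFormat = 'CSV' THEN
  vColumnExpr := 'REPLACE(' || vColumnName || ', CHR(34))';  -- drop embedded "
ELSE
  vColumnExpr := vColumnName;  -- PARQUET: pass string data through unchanged
END IF;
```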


@@ -0,0 +1,49 @@
-- ============================================================================
-- MARS-1005-PREHOOK Rollback Script 90: DATA_EXPORTER Package
-- ============================================================================
-- Purpose: Restore DATA_EXPORTER package (SPEC + BODY) to v2.6.3
-- Reverting the RFC 4180 fix (escape=true removal).
-- Schema: CT_MRDS
-- Object: PACKAGE DATA_EXPORTER
-- ============================================================================
SET SERVEROUTPUT ON SIZE UNLIMITED
PROMPT
PROMPT ============================================================================
PROMPT MARS-1005-PREHOOK: Rolling back CT_MRDS.DATA_EXPORTER Package
PROMPT ============================================================================
PROMPT Package: CT_MRDS.DATA_EXPORTER
PROMPT Version: rollback to v2.6.3
PROMPT Change: Restoring escape=true in DBMS_CLOUD.EXPORT_DATA CSV format
PROMPT ============================================================================
PROMPT
PROMPT Step 1: Restore Package Specification
PROMPT ============================================================================
@@rollback_version\DATA_EXPORTER.pkg
PROMPT
PROMPT Package specification rollback completed.
PROMPT
PROMPT
PROMPT Step 2: Restore Package Body
PROMPT ============================================================================
@@rollback_version\DATA_EXPORTER.pkb
PROMPT
PROMPT Package body rollback completed.
PROMPT
PROMPT
PROMPT ============================================================================
PROMPT DATA_EXPORTER Package rollback completed (v2.6.3 restored)
PROMPT ============================================================================
PROMPT
--=============================================================================================================================
-- End of Script
--=============================================================================================================================

Some files were not shown because too many files have changed in this diff