# GitHub Copilot Instructions - Oracle Database File Management System

## Critical Development Workflow

### **CODE-FIRST DEVELOPMENT PRINCIPLE**

**MANDATORY**: Always prepare code changes in files BEFORE deploying to the database.

**Correct Workflow:**

1. **ANALYZE** current code structure and identify required changes
2. **UPDATE STRUCTURE DEFINITIONS** in source files (.sql table/view/index definitions)
3. **PREPARE MIGRATION SCRIPTS** (ALTER TABLE, CREATE INDEX, etc.) for existing databases
4. **MODIFY PACKAGE CODE** (.pkb, .pks) to use the new structures
5. **VALIDATE** syntax and logic in all code files
6. **TEST** changes through file deployment scripts
7. **DEPLOY** migration scripts first, then the updated packages

**NEVER:**

- Execute DDL/DML directly on the database without corresponding file updates
- Create database objects without updating source control files
- Modify database structures ad hoc through SQL commands
- Skip creation of migration scripts for existing environments

**Two-Tier File Management:**

- **Structure Definitions** - Complete CREATE statements in `database/{schema}/tables/`
- **Migration Scripts** - Incremental ALTER statements in `database/migrations/`

**File-First Approach Benefits:**

- Version control integrity maintained
- Reproducible deployments across environments
- Code review capability for all changes
- Rollback capability through source control
- Team collaboration on structured changes
- Clear separation between baseline and incremental changes

---
## Project Architecture Overview

This Oracle database system provides enterprise-grade file processing and data export capabilities for Oracle Cloud Infrastructure (OCI). The architecture centers around multi-schema data pipelines with bucket-based storage management and a secure cloud wrapper.

### Dual System Architecture

- **File Management System** (Primary) - Enterprise data export and processing capabilities
- **Cloud Wrapper System** (Foundation) - Secure DBMS_CLOUD access wrapper for controlled cloud operations

### Core Components

**Primary Packages:**

- `CT_MRDS.FILE_MANAGER` - Main file processing and validation orchestration package (AUTHID CURRENT_USER)
- `CT_MRDS.DATA_EXPORTER` - Data export operations with Hive-style partitioning support (separated from FILE_MANAGER)
- `CT_MRDS.ENV_MANAGER` - Environment configuration and error management
- `CT_MRDS.FILE_ARCHIVER` - Data archiving operations
- `ODS.FILE_MANAGER_ODS` - Wrapper package with DEFINER rights for external table creation (RECOMMENDED)
- `MRDS_LOADER.cloud_wrapper` - Secure wrapper around DBMS_CLOUD functions
- `MRDS_LOADER.DATA_REPLICATOR` - Cloud data replication and export operations

**Package Architecture Recommendation:**

- **RECOMMENDED**: Use `ODS.FILE_MANAGER_ODS` (AUTHID DEFINER) - works from any user context
- **ALTERNATIVE**: Use `CT_MRDS.FILE_MANAGER` (AUTHID CURRENT_USER) - requires the ODS user context for CREATE_EXTERNAL_TABLE
- **DATA_EXPORTER**: Export procedures were moved from FILE_MANAGER to the dedicated DATA_EXPORTER package for better modularity

**Schema Architecture:**

- **CT_MRDS** - Core file management and processing schema
- **CT_ODS** - Operational Data Store with load history tracking
- **ODS** - External table creation schema with DEFINER privileges
- **OU_TOP** - Business data schema (e.g., AGGREGATED_ALLOTMENT table)
- **MRDS_LOADER** - Cloud operations schema with DBMS_CLOUD access and package ownership
- **CT_ET_TEMPLATES** - Template table definitions
- **ADMIN** - Database administrator with full privileges (REQUIRED for schema creation)
- **MICHALZ** - Standard development user with execute privileges on cloud_wrapper

### Critical Architecture Pattern

**Privilege Separation Model**: MRDS_LOADER owns the cloud_wrapper package with DBMS_CLOUD access; other users receive EXECUTE grants for controlled cloud operations.
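This privilege separation reduces to a single grant per consumer. A minimal sketch (run as ADMIN or MRDS_LOADER), consistent with the MICHALZ grant listed under Schema Architecture:

```sql
-- Only MRDS_LOADER holds DBMS_CLOUD privileges directly;
-- consumers can reach cloud storage only through cloud_wrapper.
GRANT EXECUTE ON MRDS_LOADER.cloud_wrapper TO MICHALZ;
```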
### File Processing Configuration

**Core Configuration Procedures:**

- `FILE_MANAGER.ADD_SOURCE` - Register new source systems (e.g., 'C2D', 'LM', 'TOP')
- `FILE_MANAGER.ADD_SOURCE_FILE_CONFIG` - Configure file processing rules and naming patterns
- `FILE_MANAGER.ADD_COLUMN_DATE_FORMAT` - Set up date format handling for specific columns
- `FILE_MANAGER.DELETE_SOURCE_CASCADE` - Safe removal of source systems with cascade delete
- `FILE_MANAGER.ANALYZE_VALIDATION_ERRORS` - Analyze file validation failures with detailed reporting

**Official Path Patterns (MANDATORY):**

- **INBOX**: `'INBOX/{SOURCE}/{SOURCE_FILE_ID}/{TABLE_NAME}/'` (3-level path)
- **ODS**: `'ODS/{SOURCE}/{TABLE_NAME}/'` (2-level path, no SOURCE_FILE_ID)
- **ARCHIVE**: `'ARCHIVE/{SOURCE}/{TABLE_NAME}/'` (2-level path, no SOURCE_FILE_ID)

**File Processing Workflow:**

- `FILE_MANAGER.PROCESS_SOURCE_FILE` - Main umbrella procedure (6-step validation workflow)
- Status tracking: RECEIVED → VALIDATED → READY_FOR_INGESTION → INGESTED → ARCHIVED
- Automatic file movement: INBOX → ODS/DATA → ARCHIVE (with Hive-style partitioning)
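To see where files currently sit in this status progression, a query along these lines can be used. The view `V_SOURCE_FILE_STATUS` appears in the source tree, but the column names used here (`STATUS`) are assumptions for illustration:

```sql
-- Hypothetical monitoring query; V_SOURCE_FILE_STATUS exists in the
-- repository, but its exact column names are assumed here.
SELECT STATUS, COUNT(*) AS FILE_COUNT
FROM   CT_MRDS.V_SOURCE_FILE_STATUS
GROUP  BY STATUS
ORDER  BY STATUS;
```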
## Data Export System

### Daily Data Flow Workflow

**1. Data Ingestion (INBOX)**

Daily data files arrive in the INBOX bucket area, where they undergo validation:

```sql
-- File validation and processing (umbrella procedure with 6 steps)
FILE_MANAGER.PROCESS_SOURCE_FILE(
    pSourceFileReceivedName => 'INBOX/C2D/UC_DISSEM/A_UC_DISSEM_METADATA_LOADS/data_file.csv'
);
```

**PROCESS_SOURCE_FILE Workflow (6 Steps):**

1. **REGISTER_SOURCE_FILE_RECEIVED** - Register the file and extract metadata
2. **CREATE_EXTERNAL_TABLE** - Create a temporary external table for validation
3. **VALIDATE_SOURCE_FILE_RECEIVED** - Comprehensive data validation (column count, data types, business rules)
4. **DROP_EXTERNAL_TABLE** - Clean up the temporary external table
5. **MOVE_FILE** - Relocate the file from INBOX to the ODS bucket
6. **SET_SOURCE_FILE_RECEIVED_STATUS** - Update status to 'READY_FOR_INGESTION'

**Status Progression:** RECEIVED → VALIDATED → READY_FOR_INGESTION → INGESTED → ARCHIVED

- Files are validated for structure, format, and business rules using template tables
- Successfully validated files are moved to the ODS/DATA bucket area and prepared for Airflow+DBT processing
- Failed validation results in error logging, with detailed analysis available via `ANALYZE_VALIDATION_ERRORS()`
- Comprehensive error handling covers column mismatches, file access issues, and configuration problems

**2. Data Processing (ODS/DATA)**

Validated files are processed and stored in the operational data area for further operations.

**3. Legacy Data Migration (DATA_EXPORTER)**

The DATA_EXPORTER package handles migration of data from legacy systems with three export scenarios:

**Scenario 1: Complete DATA Export (CSV)**

- All data exported to the DATA bucket area in CSV format
- No implicit partitioning applied
- Used for simple data migrations

**Scenario 2: Split Export (DATA + ARCHIVE)**

- Partial data to the DATA bucket area (CSV format)
- Partial data to the ARCHIVE bucket area (Parquet with Hive-style partitioning)
- Requires two separate export operations

**Scenario 3: Complete ARCHIVE Export (Parquet)**

- All data exported to the ARCHIVE bucket area
- Parquet format with Hive-style partitioning (PARTITION_YEAR=/PARTITION_MONTH=)
- Used for long-term storage and analytics
### Bucket Area Mapping

The system uses logical bucket areas that map to physical OCI buckets via `GET_BUCKET_URI()`:

- `'INBOX'` → gvInboxBucketUri (incoming data validation)
- `'ODS'` → gvDataBucketUri (operational data processing)
- `'DATA'` → gvDataBucketUri (processed data - CSV format)
- `'ARCHIVE'` → gvArchiveBucketUri (archived data - Parquet with partitioning)
### Export Procedures

**Scenario 1: Complete DATA Export (CSV Format)**

```sql
-- Export all data to the DATA bucket area in CSV format, without partitioning
FILE_MANAGER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
    pSchemaName    => 'OU_TOP',
    pTableName     => 'AGGREGATED_ALLOTMENT',
    pKeyColumnName => 'A_WORKFLOW_HISTORY_KEY',
    pBucketArea    => 'DATA',
    pFolderName    => 'legacy_migration',
    pMinDate       => DATE '2024-01-01',
    pMaxDate       => SYSDATE
);
```

**Scenario 2: Split Export (DATA CSV + ARCHIVE Parquet)**

```sql
-- Part 1: Export recent data to the DATA bucket (CSV format)
FILE_MANAGER.EXPORT_TABLE_DATA_TO_CSV_BY_DATE(
    pSchemaName    => 'OU_TOP',
    pTableName     => 'AGGREGATED_ALLOTMENT',
    pKeyColumnName => 'A_WORKFLOW_HISTORY_KEY',
    pBucketArea    => 'DATA',
    pFolderName    => 'current_data',
    pMinDate       => DATE '2024-01-01',
    pMaxDate       => SYSDATE
);

-- Part 2: Export historical data to the ARCHIVE bucket (Parquet with partitioning)
DATA_EXPORTER.EXPORT_TABLE_DATA_BY_DATE(
    pSchemaName    => 'OU_TOP',
    pTableName     => 'AGGREGATED_ALLOTMENT',
    pKeyColumnName => 'A_WORKFLOW_HISTORY_KEY',
    pBucketArea    => 'ARCHIVE',
    pFolderName    => 'historical_data',
    pMinDate       => DATE '2020-01-01',
    pMaxDate       => DATE '2023-12-31'
);
```

**Scenario 3: Complete ARCHIVE Export (Parquet with Partitioning)**

```sql
-- Export all data to the ARCHIVE bucket with Hive-style partitioning
DATA_EXPORTER.EXPORT_TABLE_DATA_BY_DATE(
    pSchemaName    => 'OU_TOP',
    pTableName     => 'AGGREGATED_ALLOTMENT',
    pKeyColumnName => 'A_WORKFLOW_HISTORY_KEY',
    pBucketArea    => 'ARCHIVE',
    pFolderName    => 'complete_export',
    pMinDate       => DATE '2020-01-01',
    pMaxDate       => SYSDATE
);
-- Creates: complete_export/PARTITION_YEAR=2024/PARTITION_MONTH=01/*.parquet
```

**Daily Processing Workflow**

```sql
-- Standard daily file processing procedure
FILE_MANAGER.EXPORT_TABLE_DATA_BY_DATE(
    pSchemaName => 'CT_MRDS',
    pTableName  => 'MY_TABLE',
    pBucketArea => 'DATA',
    pFolderName => 'daily_processing'
);
```
## Critical Implementation Patterns

### Error Handling Architecture

The system uses centralized error management through `ENV_MANAGER`:

**CRITICAL**: All error handling modifications must be implemented in source files first.

```sql
-- CORRECT: Use constant concatenation to avoid duplication
-- UPDATE FILE: database/CT_MRDS/packages/ENV_MANAGER.sql
FUNCTION GET_BUCKET_URI(pBucketName VARCHAR2) RETURN VARCHAR2 IS
BEGIN
    CASE pBucketName
        WHEN 'INBOX' THEN RETURN ENV_MANAGER.gvInboxBucketUri;
        -- ... other cases
        ELSE
            RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_INVALID_BUCKET_AREA,
                ENV_MANAGER.MSG_INVALID_BUCKET_AREA || ' Provided: ''' || pBucketName || '''');
    END CASE;
END;

-- WRONG: Never modify global MSG_ variables (causes duplication within the same session)
-- ENV_MANAGER.MSG_INVALID_BUCKET_AREA := ENV_MANAGER.MSG_INVALID_BUCKET_AREA || '...';

-- Structured exception handling with local variables
EXCEPTION
    WHEN ENV_MANAGER.ERR_TABLE_NOT_EXISTS THEN
        vgMsgTmp := ENV_MANAGER.MSG_TABLE_NOT_EXISTS || ': ' || vTableName;
        ENV_MANAGER.LOG_PROCESS_EVENT(vgMsgTmp, 'ERROR', vParameters);
        RAISE_APPLICATION_ERROR(ENV_MANAGER.CODE_TABLE_NOT_EXISTS, vgMsgTmp);
```

**Error Message Pattern**: Always build fresh messages from the `MSG_*` constants; never modify the global variables.
### AUTHID Patterns

- `AUTHID CURRENT_USER` - Functions execute with the caller's privileges (FILE_MANAGER)
- `AUTHID DEFINER` - Objects are created in the package owner's schema (FILE_MANAGER_ODS)
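The two AUTHID models can be contrasted with a minimal spec sketch. This is illustrative only; the real package specs contain many more procedures, and the parameter lists here are abbreviated:

```sql
-- Invoker's rights: runs with the calling user's privileges, so the
-- external table resolves against (and is created in) the caller's schema.
CREATE OR REPLACE PACKAGE FILE_MANAGER AUTHID CURRENT_USER AS
    PROCEDURE CREATE_EXTERNAL_TABLE(pTableName VARCHAR2, pTemplateTableName VARCHAR2);
END;
/

-- Definer's rights: always runs as the package owner (ODS), so external
-- tables land in the ODS schema regardless of who calls the wrapper.
CREATE OR REPLACE PACKAGE FILE_MANAGER_ODS AUTHID DEFINER AS
    PROCEDURE CREATE_EXTERNAL_TABLE(pTableName VARCHAR2, pTemplateTableName VARCHAR2);
END;
/
```

This is why the DEFINER wrapper is the recommended entry point: it works from any user context, while the CURRENT_USER variant only behaves as intended when called from ODS.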
### Date Format Configuration System

**Three-Tier Hierarchical Date Format Resolution** (confirmed through analysis of the GET_DATE_FORMAT function):

1. **Column-Specific Configuration** - First-priority lookup in the `A_COLUMN_DATE_FORMAT` table:

   ```sql
   SELECT DATE_FORMAT FROM A_COLUMN_DATE_FORMAT
   WHERE TEMPLATE_TABLE_NAME = 'CT_ET_TEMPLATES.TABLE_NAME'
     AND COLUMN_NAME = 'SPECIFIC_COLUMN'
   ```

2. **DEFAULT Column Fallback** - Second priority, using the special DEFAULT entry:

   ```sql
   SELECT DATE_FORMAT FROM A_COLUMN_DATE_FORMAT
   WHERE TEMPLATE_TABLE_NAME = 'CT_ET_TEMPLATES.TABLE_NAME'
     AND COLUMN_NAME = 'DEFAULT'
   ```

3. **Global Fallback** - Final fallback to the ENV_MANAGER global variable:

   ```sql
   -- Falls back to ENV_MANAGER.gvDefaultDateFormat ('DD/MM/YYYY HH24:MI:SS')
   -- Configured via the A_FILE_MANAGER_CONFIG table with CONFIG_VARIABLE = 'DefaultDateFormat'
   ```
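The three tiers can be sketched as a single resolution function. This is an illustrative reconstruction of the lookup order, not the actual GET_DATE_FORMAT body:

```sql
-- Sketch only: the real implementation may differ in naming and error handling
FUNCTION GET_DATE_FORMAT(pTemplateTableName VARCHAR2, pColumnName VARCHAR2)
RETURN VARCHAR2 IS
    vFormat VARCHAR2(100);
BEGIN
    -- Tier 1: column-specific entry
    BEGIN
        SELECT DATE_FORMAT INTO vFormat FROM A_COLUMN_DATE_FORMAT
        WHERE TEMPLATE_TABLE_NAME = pTemplateTableName
          AND COLUMN_NAME = pColumnName;
        RETURN vFormat;
    EXCEPTION WHEN NO_DATA_FOUND THEN NULL;  -- fall through to tier 2
    END;
    -- Tier 2: per-table DEFAULT entry
    BEGIN
        SELECT DATE_FORMAT INTO vFormat FROM A_COLUMN_DATE_FORMAT
        WHERE TEMPLATE_TABLE_NAME = pTemplateTableName
          AND COLUMN_NAME = 'DEFAULT';
        RETURN vFormat;
    EXCEPTION WHEN NO_DATA_FOUND THEN NULL;  -- fall through to tier 3
    END;
    -- Tier 3: global fallback
    RETURN ENV_MANAGER.gvDefaultDateFormat;
END;
```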
**Configuration Examples:**

```sql
-- Column-specific date format
CALL FILE_MANAGER.ADD_COLUMN_DATE_FORMAT(
    pTemplateTableName => 'CT_ET_TEMPLATES.C2D_A_UC_DISSEM_METADATA_LOADS',
    pColumnName        => 'SNAPSHOT_DATE',
    pDateFormat        => 'YYYY-MM-DD HH24:MI:SS'
);

-- DEFAULT fallback for an entire template table
CALL FILE_MANAGER.ADD_COLUMN_DATE_FORMAT(
    pTemplateTableName => 'CT_ET_TEMPLATES.C2D_A_UC_DISSEM_METADATA_LOADS',
    pColumnName        => 'DEFAULT',
    pDateFormat        => 'MM/DD/YYYY'
);

-- Global configuration (via A_FILE_MANAGER_CONFIG)
INSERT INTO A_FILE_MANAGER_CONFIG (ENVIRONMENT_ID, CONFIG_VARIABLE, CONFIG_VARIABLE_VALUE)
VALUES ('dev', 'DefaultDateFormat', 'DD-MON-YYYY HH24:MI:SS');
```
### Date Filtering Integration

Export procedures integrate with `CT_ODS.A_LOAD_HISTORY` for temporal partitioning:

```sql
-- Date-based partitioning query pattern
SELECT DISTINCT TO_CHAR(L.LOAD_START,'YYYY') AS YR, TO_CHAR(L.LOAD_START,'MM') AS MN
FROM target_table T, CT_ODS.A_LOAD_HISTORY L
WHERE T.A_WORKFLOW_HISTORY_KEY = L.A_WORKFLOW_HISTORY_KEY
  AND L.LOAD_START >= :pMinDate AND L.LOAD_START < :pMaxDate
```

### Hive-Style Partitioning Support

**CONFIRMED**: DBMS_CLOUD.EXPORT_DATA fully supports Hive-style partitioning with the key=value format:

```
folder/PARTITION_YEAR=2025/PARTITION_MONTH=09/file.parquet
```

The export creates a directory structure compatible with big data tools (Spark, Hive, etc.).
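One way to drive such a partitioned export is to loop over the distinct year/month pairs from the load history and issue one `DBMS_CLOUD.EXPORT_DATA` call per partition path. This is a sketch of the technique, not the actual DATA_EXPORTER body; the folder name and the query (which would also need a per-partition date filter) are placeholders:

```sql
-- Illustrative sketch: one EXPORT_DATA call per Hive-style partition folder
BEGIN
    FOR p IN (SELECT DISTINCT TO_CHAR(LOAD_START,'YYYY') AS YR,
                              TO_CHAR(LOAD_START,'MM')   AS MN
              FROM CT_ODS.A_LOAD_HISTORY) LOOP
        DBMS_CLOUD.EXPORT_DATA(
            credential_name => 'DEF_CRED_ARN',
            file_uri_list   => ENV_MANAGER.gvArchiveBucketUri
                               || 'folder/PARTITION_YEAR=' || p.YR
                               || '/PARTITION_MONTH='      || p.MN || '/data',
            format          => '{"type": "parquet"}',
            query           => 'SELECT * FROM OU_TOP.AGGREGATED_ALLOTMENT'  -- placeholder; filter per partition
        );
    END LOOP;
END;
/
```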
### Database Triggers

**A_SOURCE_FILE_CONFIG_KEY Auto-Generation**:

- The `TRG_BI_A_SOURCE_FILE_CONFIG_CHECK` trigger automatically generates primary keys
- Uses the `A_SOURCE_FILE_CONFIG_KEY_SEQ.NEXTVAL` sequence
- Validates CONTAINER relationships and business rules
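A minimal sketch of the auto-keying part of such a trigger (the real TRG_BI_A_SOURCE_FILE_CONFIG_CHECK additionally performs the CONTAINER and business-rule validation):

```sql
CREATE OR REPLACE TRIGGER TRG_BI_A_SOURCE_FILE_CONFIG_CHECK
BEFORE INSERT ON A_SOURCE_FILE_CONFIG
FOR EACH ROW
BEGIN
    -- Assign a surrogate key only when the caller did not supply one
    IF :NEW.A_SOURCE_FILE_CONFIG_KEY IS NULL THEN
        :NEW.A_SOURCE_FILE_CONFIG_KEY := A_SOURCE_FILE_CONFIG_KEY_SEQ.NEXTVAL;
    END IF;
END;
/
```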
### External Table Management

**Template Tables and External Tables Setup:**

- **Template Tables**: Located in the `CT_ET_TEMPLATES` schema; they define the structure of external tables
- **CRITICAL**: Template tables can ONLY be created in the `CT_ET_TEMPLATES` schema, by the ADMIN or CT_ET_TEMPLATES user
- **MRDS_LOADER Limitation**: Cannot create template tables - it must use existing ones or request that ADMIN create them
- **External Table Creation**: Use `ODS.FILE_MANAGER_ODS.CREATE_EXTERNAL_TABLE` (recommended) or `CT_MRDS.FILE_MANAGER.CREATE_EXTERNAL_TABLE`
- **Storage Locations**: INBOX (incoming), ODS (operational), ARCHIVE (historical with partitioning)
- **Validation Features**: Automatic CSV column validation, excess column detection, detailed error reporting

**MRDS_LOADER External Table Creation Pattern:**

- **RECOMMENDED APPROACH**: Use the MRDS_LOADER connection with the `ODS.FILE_MANAGER_ODS` package to create external tables in the ODS schema
- **Required Prerequisites**:
  - The template table must exist in CT_ET_TEMPLATES (created by ADMIN)
  - MRDS_LOADER must have comprehensive privileges on CT_MRDS schema objects
  - The ODS schema must have access to CT_MRDS.ENV_MANAGER for logging operations
- **Privilege Requirements for MRDS_LOADER**:

  ```sql
  -- CT_MRDS schema access (essential for FILE_MANAGER_ODS operations)
  GRANT SELECT, INSERT, UPDATE, DELETE ON CT_MRDS.A_PROCESS_LOG TO MRDS_LOADER;
  GRANT EXECUTE ON CT_MRDS.ENV_MANAGER TO MRDS_LOADER;
  GRANT EXECUTE ON ODS.FILE_MANAGER_ODS TO MRDS_LOADER;
  ```

- **Privilege Requirements for ODS Schema**:

  ```sql
  -- Required for FILE_MANAGER_ODS to access logging and configuration
  GRANT SELECT, INSERT, UPDATE, DELETE ON CT_MRDS.A_PROCESS_LOG TO ODS;
  GRANT EXECUTE ON CT_MRDS.ENV_MANAGER TO ODS;
  ```
**Configuration Examples:**

```sql
-- Register a source system
CALL CT_MRDS.FILE_MANAGER.ADD_SOURCE(pSourceKey => 'C2D', pSourceName => 'Central Bank Data');

-- Configure file processing
CALL CT_MRDS.FILE_MANAGER.ADD_SOURCE_FILE_CONFIG(
    pSourceKey => 'C2D', pSourceFileType => 'INPUT', pSourceFileId => 'UC_DISSEM',
    pSourceFileDesc => 'UC DISSEM Metadata', pSourceFileNamePattern => 'UC_NMA_DISSEM-*.csv',
    pTableId => 'A_UC_DISSEM_METADATA_LOADS', pTemplateTableName => 'CT_ET_TEMPLATES.C2D_A_UC_DISSEM_METADATA_LOADS'
);

-- Configure date formats
CALL CT_MRDS.FILE_MANAGER.ADD_COLUMN_DATE_FORMAT(
    pTemplateTableName => 'CT_ET_TEMPLATES.C2D_A_UC_DISSEM_METADATA_LOADS',
    pColumnName => 'SNAPSHOT_DATE', pDateFormat => 'YYYY-MM-DD HH24:MI:SS'
);
```
**Complete External Table Creation Example (STANDING_FACILITIES):**

```sql
-- Step 1: Create the template table (as ADMIN)
CREATE TABLE CT_ET_TEMPLATES.LM_STANDING_FACILITIES (
    A_KEY        NUMBER NOT NULL,
    REV_NUMBER   NUMBER,
    REF_DATE     DATE,
    FREE_TEXT    VARCHAR2(1000),
    MLF_BS_TOTAL NUMBER,
    DF_BS_TOTAL  NUMBER,
    MLF_SF_TOTAL NUMBER,
    DF_SF_TOTAL  NUMBER
);

-- Step 2: Configure the date format (as CT_MRDS, or via MRDS_LOADER with privileges)
CALL CT_MRDS.FILE_MANAGER.ADD_COLUMN_DATE_FORMAT(
    pTemplateTableName => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES',
    pColumnName        => 'REF_DATE',
    pDateFormat        => 'DD/MM/YYYY'
);

-- Step 3: Create the external tables using the MRDS_LOADER connection
-- (Requires all prerequisite privileges for the MRDS_LOADER and ODS schemas)
-- Following the OFFICIAL PATH PATTERNS

-- INBOX table (CSV format for incoming files) - 3-level path
BEGIN
    ODS.FILE_MANAGER_ODS.CREATE_EXTERNAL_TABLE(
        pTableName         => 'LM_STANDING_FACILITIES_INBOX',
        pTemplateTableName => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES',
        pPrefix            => 'INBOX/LM/STANDING_FACILITIES/STANDING_FACILITIES',
        pBucketUri         => CT_MRDS.ENV_MANAGER.gvInboxBucketUri
    );
END;
/

-- ODS table (CSV format for operational data) - 2-level path
BEGIN
    ODS.FILE_MANAGER_ODS.CREATE_EXTERNAL_TABLE(
        pTableName         => 'LM_STANDING_FACILITIES_ODS',
        pTemplateTableName => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES',
        pPrefix            => 'ODS/LM/STANDING_FACILITIES',
        pBucketUri         => CT_MRDS.ENV_MANAGER.gvDataBucketUri
    );
END;
/

-- ARCHIVE table (Parquet format for historical data) - 2-level path
BEGIN
    ODS.FILE_MANAGER_ODS.CREATE_EXTERNAL_TABLE(
        pTableName         => 'LM_STANDING_FACILITIES_ARCHIVE',
        pTemplateTableName => 'CT_ET_TEMPLATES.LM_STANDING_FACILITIES',
        pPrefix            => 'ARCHIVE/LM/STANDING_FACILITIES',
        pBucketUri         => CT_MRDS.ENV_MANAGER.gvArchiveBucketUri
    );
END;
/

-- Result: Three external tables created in the ODS schema:
--   ODS.LM_STANDING_FACILITIES_INBOX   (CSV)
--   ODS.LM_STANDING_FACILITIES_ODS     (CSV)
--   ODS.LM_STANDING_FACILITIES_ARCHIVE (Parquet)
```
## Development Workflows

### Database Operations Approaches

#### **MCP Server Approach (Primary - Recommended)**

Use the MCP SQLcl server for most database operations:

```sql
-- Connect to a specific schema
mcp_sqlcl_connect("ADMIN@ggmichalski_high")
mcp_sqlcl_connect("CT_MRDS@ggmichalski_high")

-- Execute SQL directly
mcp_sqlcl_run-sql("SELECT USER FROM DUAL")
mcp_sqlcl_run-sql("CREATE OR REPLACE PACKAGE...")

-- List available connections
mcp_sqlcl_list-connections()
```

#### **Traditional SQLcl Approach (Secondary)**

Use direct SQLcl commands for complex scripts:

```powershell
# Single script execution pattern
Get-Content "path/to/script.sql" | sql "USERNAME/PASSWORD@SERVICE"
```
### Package Deployment

```powershell
# Core packages deployment sequence using source DDL files
Get-Content "MARS_Packages\mrds_elt-dev-database\mrds_elt-dev-database\database\CT_MRDS\packages\ENV_MANAGER.sql" | sql "CT_MRDS/Cloudpass#34@ggmichalski_high"
Get-Content "MARS_Packages\mrds_elt-dev-database\mrds_elt-dev-database\database\CT_MRDS\packages\FILE_MANAGER.sql" | sql "CT_MRDS/Cloudpass#34@ggmichalski_high"
Get-Content "MARS_Packages\mrds_elt-dev-database\mrds_elt-dev-database\database\ODS\packages\FILE_MANAGER_ODS.sql" | sql "ODS/Cloudpass#34@ggmichalski_high"

# MARS issue deployment using release packages
Get-Content "MARS_Packages\REL01\MARS-846\*.sql" | sql "CT_MRDS/Cloudpass#34@ggmichalski_high"
```

### Archive Creation

```powershell
# Create an encrypted archive of core packages from source DDL
7z a -pMojeSuperHaslo -mhe=on FM_arch.7z MARS_Packages\mrds_elt-dev-database\mrds_elt-dev-database\database\CT_MRDS\packages\FILE_MANAGER.sql MARS_Packages\mrds_elt-dev-database\mrds_elt-dev-database\database\CT_MRDS\packages\FILE_ARCHIVER.sql MARS_Packages\mrds_elt-dev-database\mrds_elt-dev-database\database\CT_MRDS\packages\ENV_MANAGER.sql

# Create an archive of a specific MARS release
7z a -pMojeSuperHaslo -mhe=on MARS_846_arch.7z MARS_Packages\REL01\MARS-846\*.sql
```

### PowerShell Integration

Use the `OracleHelper` module for batch operations:

```powershell
Import-Module ".\powershell_modules\OracleHelper\OracleHelper.psm1" -Force
Invoke-OracleScript -ScriptPath "database\CT_MRDS\packages\FILE_MANAGER.sql" -User "CT_MRDS"
```
### Development File Structure Guidelines

**Source Code Organization:**

```
MARS_Packages/
├── mrds_elt-dev-database/
│   └── mrds_elt-dev-database/
│       └── database/                              # SOURCE DDL FILES (baseline definitions)
│           ├── CT_MRDS/
│           │   ├── packages/
│           │   │   ├── ENV_MANAGER.sql            # Environment management
│           │   │   ├── FILE_MANAGER.sql           # Main file processing
│           │   │   ├── DATA_EXPORTER.sql          # Data export operations
│           │   │   └── FILE_ARCHIVER.sql          # Archival operations
│           │   ├── tables/
│           │   │   ├── A_SOURCE_FILE_CONFIG.sql   # Source configuration (CREATE TABLE)
│           │   │   ├── A_PROCESS_LOG.sql          # Process logging (CREATE TABLE)
│           │   │   └── A_FILE_MANAGER_CONFIG.sql  # Environment config (CREATE TABLE)
│           │   ├── views/
│           │   │   └── V_SOURCE_FILE_STATUS.sql   # View definitions (CREATE VIEW)
│           │   ├── indexes/
│           │   │   └── IDX_SOURCE_FILE_CONFIG.sql # Index definitions (CREATE INDEX)
│           │   └── triggers/
│           │       └── TRG_BI_A_SOURCE_FILE_CONFIG.sql
│           ├── ODS/
│           │   ├── FILE_MANAGER_ODS.pkb           # DEFINER wrapper body
│           │   ├── FILE_MANAGER_ODS.pkg           # DEFINER wrapper spec
│           │   ├── CREATE_USER.sql                # User creation scripts
│           │   └── CREATE_ROLE.sql                # Role creation scripts
│           └── CT_ET_TEMPLATES/
│               └── tables/
│                   ├── CSDB_DEBT.sql              # Template tables
│                   ├── CSDB_DEBT_DAILY.sql
│                   └── [other_templates].sql
├── REL01/                                         # RELEASE PACKAGES (deployment scripts)
│   ├── MARS-846/
│   │   ├── 01_MARS_846_install_*.sql
│   │   ├── 02_MARS_846_*.sql
│   │   ├── 91_MARS_846_rollback_*.sql
│   │   └── README.md
│   ├── MARS-1011/
│   └── [other_issues]/
├── REL02/                                         # NEXT RELEASE
│   ├── MARS-949/
│   ├── MARS-954/
│   └── [other_issues]/
└── migrations/                                    # MIGRATION SCRIPTS (for existing environments)
    ├── 001_add_characterset_columns.sql           # ALTER TABLE scripts
    ├── 002_create_config_indexes.sql              # New indexes
    ├── 003_update_default_values.sql              # Data updates
    └── rollback/
        ├── 001_rollback_characterset.sql          # Rollback scripts
        └── 002_rollback_indexes.sql
```
**Mandatory File Updates for Structural Changes:**

**1. Structure Definition Updates:**

- **New Tables**: Create a complete `CREATE TABLE` in `MARS_Packages/mrds_elt-dev-database/mrds_elt-dev-database/database/{schema}/tables/{table_name}.sql`
- **Table Modifications**: Update existing table definition files with the new structure
- **New Indexes**: Add `CREATE INDEX` statements in `database/{schema}/indexes/`
- **View Changes**: Update view definitions in `database/{schema}/views/`

**2. Migration Script Creation:**

- **Existing Environment Changes**: Create `ALTER TABLE` scripts in `MARS_Packages/migrations/`
- **Data Migration**: Prepare `INSERT/UPDATE` scripts for configuration changes
- **Rollback Scripts**: Create corresponding rollback scripts in `MARS_Packages/migrations/rollback/`
- **Deployment Order**: Number migration scripts sequentially (001_, 002_, etc.)

**3. Release Package Creation:**

- **MARS Issue Packages**: Create deployment scripts in `MARS_Packages/{RELEASE}/{MARS_ISSUE}/`
- **Installation Scripts**: Numbered install scripts (01_, 02_, etc.)
- **Rollback Scripts**: Numbered rollback scripts (91_, 92_, etc.)
- **Documentation**: Include a README.md with deployment instructions

**4. Package Code Updates:**

- **Package Modifications**: Update .pks (spec) and .pkb (body) files in the source `database/{schema}/packages/`
- **Wrapper Packages**: Update related wrapper packages if applicable
- **Documentation**: Update the confluence/ directory with the changes
**Change Management Process:**

```sql
-- 1. PREPARE STRUCTURE DEFINITIONS: Update baseline table definitions
-- File: MARS_Packages/mrds_elt-dev-database/mrds_elt-dev-database/database/CT_MRDS/tables/A_SOURCE_FILE_CONFIG.sql
CREATE TABLE A_SOURCE_FILE_CONFIG (
    A_SOURCE_FILE_CONFIG_KEY NUMBER NOT NULL,
    -- ... existing columns ...
    CHARACTERSET VARCHAR2(50) DEFAULT 'WE8MSWIN1252',   -- NEW COLUMN
    DELIMITER    VARCHAR2(10) DEFAULT ',',              -- NEW COLUMN
    CONSTRAINT PK_A_SOURCE_FILE_CONFIG PRIMARY KEY (A_SOURCE_FILE_CONFIG_KEY)
);

-- 2. PREPARE MIGRATION SCRIPT: For existing environments
-- File: MARS_Packages/migrations/001_add_characterset_columns.sql
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG ADD (
    CHARACTERSET VARCHAR2(50) DEFAULT 'WE8MSWIN1252',
    DELIMITER    VARCHAR2(10) DEFAULT ','
);
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.CHARACTERSET IS 'File encoding';

-- 3. CREATE RELEASE PACKAGE: For MARS issue deployment
-- File: MARS_Packages/REL02/MARS-XXXX/01_MARS_XXXX_add_characterset_columns.sql
-- (Copy the migration script content, with a proper header and logging)

-- 4. PREPARE ROLLBACK SCRIPT: For emergency rollback
-- File: MARS_Packages/REL02/MARS-XXXX/91_MARS_XXXX_rollback_characterset.sql
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG DROP (CHARACTERSET, DELIMITER);
```

```powershell
# 5. DEPLOY: Execute the MARS package scripts
Get-Content "MARS_Packages/REL02/MARS-XXXX/01_MARS_XXXX_add_characterset_columns.sql" | sql "CT_MRDS/password@service"
Get-Content "MARS_Packages/REL02/MARS-XXXX/02_MARS_XXXX_update_packages.sql" | sql "CT_MRDS/password@service"
```
### Cloud Storage Integration

#### Available Object Storage Buckets

- **obucket** - Primary bucket for data operations (contains test files and parquet exports)
- **data** - Data storage bucket
- **archive** - Archived data bucket
- **inbox** - Incoming data bucket

#### OCI CLI Operations

```powershell
# Object Storage operations
oci os object put --bucket-name obucket --file "data.csv" --name "import/data.csv"
oci os object get --bucket-name obucket --name "export/results.parquet" --file "results.parquet"
oci os object list --bucket-name obucket --prefix "folder/"

# Namespace: frtgjxu7zl7c
# Region: eu-frankfurt-1 (Frankfurt)
```
#### Credential Management Pattern

Cloud operations use the `DEF_CRED_ARN` credential, which points to the Oracle Cloud Storage bucket: `https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frtgjxu7zl7c/b/obucket/`
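If the credential ever needs to be recreated, `DBMS_CLOUD.CREATE_CREDENTIAL` is the standard mechanism. The username and token values below are placeholders, not the real secrets:

```sql
BEGIN
    DBMS_CLOUD.CREATE_CREDENTIAL(
        credential_name => 'DEF_CRED_ARN',
        username        => 'oci_user@example.com',   -- placeholder OCI user
        password        => '<auth-token>'            -- placeholder OCI auth token
    );
END;
/
```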
## File Organization Conventions

### Storage Hierarchy

```
OCI Bucket Structure (OFFICIAL PATTERNS):
INBOX/{SOURCE}/{SOURCE_FILE_ID}/{TABLE_NAME}/                              -- 3-level (incoming validation)
ODS/{SOURCE}/{TABLE_NAME}/                                                 -- 2-level (operational data)
ARCHIVE/{SOURCE}/{TABLE_NAME}/PARTITION_YEAR={YYYY}/PARTITION_MONTH={MM}/  -- 2-level + partitioning
```
### Documentation Location

- Technical guides → `confluence/` directory
- Package documentation → Auto-generated into `confluence/`
- Setup procedures → `confluence/Tables_setup.md`

#### Documentation Generation Workflow

```powershell
# Generate package documentation using the GET_PACKAGE_DOCUMENTATION function
echo "SET PAGESIZE 0`nSET LINESIZE 32000`nSET LONG 1000000`nSELECT GET_PACKAGE_DOCUMENTATION('PACKAGE_NAME', 'SCHEMA_NAME') FROM DUAL;" | sql "CT_MRDS/Cloudpass#34@ggmichalski_high" > confluence/package_name_documentation.md
```

**GET_PACKAGE_DOCUMENTATION Function:**

- **Purpose**: Automatically generates comprehensive markdown documentation from Oracle packages
- **Features**: Extracts procedure metadata, function signatures, parameter information, and embedded comments
- **Usage**: `GET_PACKAGE_DOCUMENTATION('FILE_MANAGER', 'CT_MRDS')` returns a CLOB with formatted markdown
- **Output**: Complete package documentation, including usage examples and return types
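Inside an interactive session the returned CLOB can also be inspected with a chunked DBMS_OUTPUT loop, since a large package's documentation can exceed the 32767-byte PUT_LINE limit. A minimal sketch (chunk boundaries will not respect line breaks):

```sql
SET SERVEROUTPUT ON
DECLARE
    vDoc   CLOB := GET_PACKAGE_DOCUMENTATION('FILE_MANAGER', 'CT_MRDS');
    vPos   PLS_INTEGER := 1;
    cChunk CONSTANT PLS_INTEGER := 4000;
BEGIN
    -- Emit the CLOB in fixed-size chunks
    WHILE vPos <= DBMS_LOB.GETLENGTH(vDoc) LOOP
        DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(vDoc, cChunk, vPos));
        vPos := vPos + cChunk;
    END LOOP;
END;
/
```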
## Testing and Validation

### SQL Package Testing Infrastructure

**Standard Package Testing Pattern with @install:**

```sql
-- Master install script pattern (executes sub-scripts using @@)
-- Example: install_mars1049.sql
@@01_MARS_1049_install_CT_MRDS_ADD_ENCODING_COLUMN.sql
@@02_MARS_1049_install_CT_MRDS_FILE_MANAGER_SPEC.sql
@@03_MARS_1049_install_CT_MRDS_FILE_MANAGER_BODY.sql
@@04_MARS_1049_install_ODS_FILE_MANAGER_ODS_SPEC.sql
@@05_MARS_1049_install_ODS_FILE_MANAGER_ODS_BODY.sql
@@06_MARS_1049_verify_encoding_functionality.sql
@@07_MARS_1049_track_version.sql

-- Usage with PowerShell/SQLcl:
-- sql 'CT_MRDS/Cloudpass#34@ggmichalski_high' '@install_mars1049.sql'
```

**Package Verification Standards:**

```sql
-- Standard verification script pattern (XX_verify_*.sql)
-- Check compilation status
SELECT object_name, object_type, status
FROM user_objects
WHERE object_name = 'PACKAGE_NAME'
  AND object_type IN ('PACKAGE', 'PACKAGE BODY');

-- Verify the package version
SELECT SCHEMA.PACKAGE_NAME.GET_VERSION() AS VERSION FROM DUAL;

-- Check build information
SELECT SCHEMA.PACKAGE_NAME.GET_BUILD_INFO() AS BUILD_INFO FROM DUAL;

-- Verify functionality (package-specific tests)
-- Example: Test new encoding functionality
CALL FILE_MANAGER.ADD_SOURCE_FILE_CONFIG(..., pEncoding => 'WE8MSWIN1252');
```

**Testing Workflow Integration:**

- **Phase 1**: Structure Changes (`01_*_install_*_TABLE.sql`)
- **Phase 2**: Package Specification (`02_*_install_*_SPEC.sql`)
- **Phase 3**: Package Body (`03_*_install_*_BODY.sql`)
- **Phase 4**: Verification (`04_*_verify_*.sql`)
- **Phase 5**: Version Tracking (`05_*_track_*.sql`)

### Key Test Files

- `MARS_Packages/REL*/MARS-*/install_mars*.sql` - Master installation scripts using the @@include pattern
- `MARS_Packages/REL*/MARS-*/rollback_mars*.sql` - Complete rollback testing
### Common Operations

```sql
-- Test bucket area validation
SELECT GET_BUCKET_URI('DATA') FROM DUAL;
-- Returns: https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frtgjxu7zl7c/b/data/o/

-- Validate export procedures with Hive-style partitioning
BEGIN
    DATA_EXPORTER.EXPORT_TABLE_DATA_BY_DATE(
        pSchemaName    => 'OU_TOP',
        pTableName     => 'AGGREGATED_ALLOTMENT',
        pKeyColumnName => 'A_WORKFLOW_HISTORY_KEY',
        pBucketArea    => 'DATA',
        pFolderName    => 'test_export',
        pMinDate       => DATE '2025-09-01',
        pMaxDate       => DATE '2025-09-30'
    );
END;
/
-- Creates files in: test_export/PARTITION_YEAR=2025/PARTITION_MONTH=09/*.parquet

-- List cloud storage files using cloud_wrapper
SELECT object_name FROM TABLE(MRDS_LOADER.cloud_wrapper.list_objects(
    credential_name => 'DEF_CRED_ARN',
    location_uri    => 'https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frtgjxu7zl7c/b/data/'
)) WHERE object_name LIKE 'test_export%';
```
## Connection Management

### User Credentials

- **Password**: All users use `Cloudpass#34`
- **Service**: All connect via the `ggmichalski_high` service
- **Available Users**: ADMIN, MRDS_LOADER, MICHALZ, CT_MRDS, CT_ODS, ODS, CT_ET_TEMPLATES, OU_TOP

### SQLcl Connection Commands

```sql
-- Connect to the database as a specific user
connect ADMIN/Cloudpass#34@ggmichalski_high
connect CT_MRDS/Cloudpass#34@ggmichalski_high
connect MRDS_LOADER/Cloudpass#34@ggmichalski_high

-- Connection management
connmgr list                        -- List all saved connections
connmgr test ODS@ggmichalski_high   -- Test a specific connection
```

## Critical Implementation Patterns

### MCP Server Integration

The system supports Model Context Protocol (MCP) servers for enhanced database operations:

```sql
-- Use MCP SQLcl tools for database operations
mcp_sqlcl_connect("CT_MRDS@ggmichalski_high")
mcp_sqlcl_run-sql("EXEC FILE_MANAGER.EXPORT_TABLE_DATA(...)")
mcp_pylance_mcp_s_pylanceRunCodeSnippet() -- For Python validation
```

### Configuration Management Pattern

Environment variables are centrally managed via the `A_FILE_MANAGER_CONFIG` table.

**MANDATORY**: Update configuration table definition files before database changes.

```sql
-- UPDATE FILE: database/CT_MRDS/tables/A_FILE_MANAGER_CONFIG.sql
-- INCLUDE: Initial data setup with correct camelCase format

CREATE TABLE A_FILE_MANAGER_CONFIG (
    ENVIRONMENT_ID        VARCHAR2(50)  NOT NULL,
    CONFIG_VARIABLE       VARCHAR2(100) NOT NULL,
    CONFIG_VARIABLE_VALUE VARCHAR2(4000),
    CONSTRAINT PK_A_FILE_MANAGER_CONFIG PRIMARY KEY (ENVIRONMENT_ID, CONFIG_VARIABLE)
);

-- Configuration variables loaded at package initialization
INSERT INTO A_FILE_MANAGER_CONFIG (ENVIRONMENT_ID, CONFIG_VARIABLE, CONFIG_VARIABLE_VALUE)
VALUES ('default', 'EnvironmentID', 'dev');
INSERT INTO A_FILE_MANAGER_CONFIG (ENVIRONMENT_ID, CONFIG_VARIABLE, CONFIG_VARIABLE_VALUE)
VALUES ('dev', 'Region', 'eu-frankfurt-1');
INSERT INTO A_FILE_MANAGER_CONFIG (ENVIRONMENT_ID, CONFIG_VARIABLE, CONFIG_VARIABLE_VALUE)
VALUES ('dev', 'NameSpace', 'frtgjxu7zl7c');
INSERT INTO A_FILE_MANAGER_CONFIG (ENVIRONMENT_ID, CONFIG_VARIABLE, CONFIG_VARIABLE_VALUE)
VALUES ('dev', 'InboxBucketName', 'inbox');
INSERT INTO A_FILE_MANAGER_CONFIG (ENVIRONMENT_ID, CONFIG_VARIABLE, CONFIG_VARIABLE_VALUE)
VALUES ('dev', 'DataBucketName', 'data');
INSERT INTO A_FILE_MANAGER_CONFIG (ENVIRONMENT_ID, CONFIG_VARIABLE, CONFIG_VARIABLE_VALUE)
VALUES ('dev', 'ArchiveBucketName', 'archive');
```

**CRITICAL**: Variable names are case-sensitive - use the exact camelCase format shown above.

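Packages can resolve configuration in two steps: read `EnvironmentID` from the `default` row, then look up environment-specific variables under that environment. A minimal sketch of this lookup follows; `GET_CONFIG_VALUE` is a hypothetical helper, not part of the documented API:

```sql
-- Sketch: resolve a configuration value for the active environment.
-- Assumption: GET_CONFIG_VALUE is illustrative and not an existing package function.
CREATE OR REPLACE FUNCTION GET_CONFIG_VALUE(pVariable IN VARCHAR2) RETURN VARCHAR2 IS
    vEnvironmentId VARCHAR2(50);
    vValue         VARCHAR2(4000);
BEGIN
    -- Step 1: the 'default' row points at the active environment (e.g. 'dev')
    SELECT CONFIG_VARIABLE_VALUE INTO vEnvironmentId
      FROM A_FILE_MANAGER_CONFIG
     WHERE ENVIRONMENT_ID = 'default' AND CONFIG_VARIABLE = 'EnvironmentID';

    -- Step 2: read the environment-specific value (variable name is case-sensitive)
    SELECT CONFIG_VARIABLE_VALUE INTO vValue
      FROM A_FILE_MANAGER_CONFIG
     WHERE ENVIRONMENT_ID = vEnvironmentId AND CONFIG_VARIABLE = pVariable;

    RETURN vValue;
END;
/
```

With the seed rows above, `GET_CONFIG_VALUE('DataBucketName')` would resolve to `data`.
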
### Parameter Logging Pattern

All procedures use standardized parameter logging:

```sql
-- Standard parameter formatting for logging
vParameters := ENV_MANAGER.FORMAT_PARAMETERS(SYS.ODCIVARCHAR2LIST(
    'pSchemaName => '''||nvl(pSchemaName,'NULL')||'''',
    'pTableName => '''||nvl(pTableName,'NULL')||''''
));
ENV_MANAGER.LOG_PROCESS_EVENT('Start','INFO', vParameters);
```

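Applied end to end, the pattern brackets the procedure body with start and end events and logs failures before re-raising. This skeleton is illustrative; the `'End'` and `'Error'` event names are assumptions beyond the documented `'Start'` call:

```sql
-- Sketch: full procedure skeleton using the logging pattern (names illustrative)
PROCEDURE EXAMPLE_PROC(pSchemaName IN VARCHAR2, pTableName IN VARCHAR2) IS
    vParameters VARCHAR2(4000);
BEGIN
    vParameters := ENV_MANAGER.FORMAT_PARAMETERS(SYS.ODCIVARCHAR2LIST(
        'pSchemaName => '''||nvl(pSchemaName,'NULL')||'''',
        'pTableName => '''||nvl(pTableName,'NULL')||''''
    ));
    ENV_MANAGER.LOG_PROCESS_EVENT('Start','INFO', vParameters);

    -- ... procedure body ...

    ENV_MANAGER.LOG_PROCESS_EVENT('End','INFO', NULL);   -- assumed event name
EXCEPTION
    WHEN OTHERS THEN
        ENV_MANAGER.LOG_PROCESS_EVENT('Error','ERROR', SQLERRM);  -- assumed event name
        RAISE;
END EXAMPLE_PROC;
```
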
### Cloud Wrapper Integration

```sql
-- PIPELINED function pattern (for list_objects)
FUNCTION list_objects(...) RETURN DBMS_CLOUD_TYPES.LIST_OBJECT_RET_TAB PIPELINED IS
BEGIN
    FOR rec IN (SELECT * FROM TABLE(DBMS_CLOUD.LIST_OBJECTS(...))) LOOP
        PIPE ROW(rec);
    END LOOP;
EXCEPTION
    WHEN OTHERS THEN
        RAISE_APPLICATION_ERROR(-20005, 'Error listing objects: ' || SQLERRM);
END;
```

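Consistent with the privilege separation model (MRDS_LOADER owns cloud operations, other schemas receive EXECUTE grants), the wrapper might be exposed as follows; the grantee list is illustrative:

```sql
-- Run as MRDS_LOADER: expose cloud_wrapper to consuming schemas (illustrative grantees)
GRANT EXECUTE ON MRDS_LOADER.cloud_wrapper TO CT_MRDS;
GRANT EXECUTE ON MRDS_LOADER.cloud_wrapper TO ODS;
```
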
### Resource Principal Configuration

```sql
-- Enable Resource Principal (execute as ADMIN)
EXEC DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL(username => 'CT_MRDS');
EXEC DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL(username => 'MRDS_LOADER');

-- Test Resource Principal access
SELECT object_name FROM DBMS_CLOUD.LIST_OBJECTS(
    credential_name => 'OCI$RESOURCE_PRINCIPAL',
    location_uri    => 'https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frtgjxu7zl7c/b/data/'
) WHERE ROWNUM <= 5;
```

## Debugging and Troubleshooting

### Package Compilation Validation

```sql
-- Check for compilation errors after changes (covers both specification and body)
SELECT * FROM USER_ERRORS
 WHERE TYPE IN ('PACKAGE', 'PACKAGE BODY')
   AND NAME = 'FILE_MANAGER';
```

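If errors are reported, the package can be recompiled and the error details listed directly in SQLcl or SQL*Plus:

```sql
-- Recompile and inspect errors
ALTER PACKAGE FILE_MANAGER COMPILE;
ALTER PACKAGE FILE_MANAGER COMPILE BODY;
SHOW ERRORS PACKAGE BODY FILE_MANAGER
```
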
### MCP Pylance Integration

For Python-related Oracle operations, use Pylance MCP tools:

```sql
-- Validate Python code syntax before database integration
mcp_pylance_mcp_s_pylanceSyntaxErrors(code, pythonVersion)

-- Check the workspace Python environment
mcp_pylance_mcp_s_pylancePythonEnvironments(workspaceRoot)
```

### Archive and Backup Patterns

```powershell
# Standard backup command for all core packages
7z a -pMojeSuperHaslo -mhe=on FM_arch.7z database\CT_MRDS\packages\FILE_MANAGER.sql database\CT_MRDS\packages\FILE_ARCHIVER.sql database\CT_MRDS\packages\ENV_MANAGER.sql database\ODS\packages\FILE_MANAGER_ODS.sql
```

## Key Development Principles

When working with this system:

### **PRIMARY PRINCIPLE: CODE-FIRST DEVELOPMENT**

1. **ALWAYS update source files BEFORE database changes** - maintain version control integrity
2. **Prepare complete file modifications** before any database deployment
3. **Validate changes in files** before executing on database
4. **Document structural changes** in corresponding .sql files

### **SYSTEM-SPECIFIC PRINCIPLES**

5. **Always use MSG_* constants** for error messages to prevent session-level duplication
6. **Follow the privilege separation model** - MRDS_LOADER owns cloud operations, others get EXECUTE grants
7. **Use standardized parameter logging** with FORMAT_PARAMETERS for all procedures
8. **Test bucket area mappings** before deploying export procedures
9. **Leverage MCP tools** for database operations and Python validation when available
10. **Hive-style partitioning is fully supported** - DBMS_CLOUD.EXPORT_DATA creates the PARTITION_YEAR=/PARTITION_MONTH= structure
11. **Use triggers for auto-key generation** - TRG_BI_A_SOURCE_FILE_CONFIG_CHECK handles A_SOURCE_FILE_CONFIG_KEY
12. **Follow the configuration sequence** - ADD_SOURCE → ADD_SOURCE_FILE_CONFIG → ADD_COLUMN_DATE_FORMAT
13. **Use the ODS.FILE_MANAGER_ODS package** for external table creation (AUTHID DEFINER) over CT_MRDS.FILE_MANAGER (AUTHID CURRENT_USER)
14. **Implement comprehensive validation** - Use PROCESS_SOURCE_FILE for the complete 6-step file validation workflow
15. **Utilize ANALYZE_VALIDATION_ERRORS** for detailed file validation error analysis and troubleshooting
16. **Safe configuration removal** - Use DELETE_SOURCE_CASCADE for complete source system cleanup with referential integrity
17. **Template table restriction** - Template tables can ONLY be created in the CT_ET_TEMPLATES schema by ADMIN or the CT_ET_TEMPLATES user, not by MRDS_LOADER
18. **MRDS_LOADER external table creation** - Use the MRDS_LOADER connection with the ODS.FILE_MANAGER_ODS package for creating external tables in the ODS schema (requires proper privilege setup for both the MRDS_LOADER and ODS schemas)
19. **Cross-schema privilege requirements** - When using the FILE_MANAGER_ODS package, ensure both MRDS_LOADER and ODS have sufficient privileges on CT_MRDS schema objects (especially A_PROCESS_LOG and ENV_MANAGER)
20. **MANDATORY PATH PATTERNS** - Always use the official patterns: INBOX (3-level), ODS (2-level), ARCHIVE (2-level) - no simplified versions allowed
21. **Path pattern compliance** - INBOX requires SOURCE_FILE_ID, while ODS and ARCHIVE omit it for simplified access

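The level counts and the SOURCE_FILE_ID rule can be illustrated as follows; the exact segment names are assumptions for illustration, not the official pattern definitions:

```sql
-- Illustrative path shapes only (segment names are assumed, level counts per the rules above):
-- INBOX   (3-level, includes SOURCE_FILE_ID): <SOURCE_ID>/<SOURCE_FILE_ID>/<FILE_NAME>
-- ODS     (2-level, omits SOURCE_FILE_ID):    <SOURCE_ID>/<FILE_NAME>
-- ARCHIVE (2-level, omits SOURCE_FILE_ID):    <SOURCE_ID>/<FILE_NAME>
```
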
### **Common File Modification Scenarios**

**Adding New Columns to Configuration Tables:**

```sql
-- 1. UPDATE STRUCTURE DEFINITION: database/CT_MRDS/tables/A_SOURCE_FILE_CONFIG.sql
-- Modify the complete CREATE TABLE statement to include new columns
CREATE TABLE A_SOURCE_FILE_CONFIG (
    A_SOURCE_FILE_CONFIG_KEY NUMBER NOT NULL,
    -- ... existing columns ...
    NEW_COLUMN VARCHAR2(50) DEFAULT 'default_value', -- ADD THIS LINE
    CONSTRAINT PK_A_SOURCE_FILE_CONFIG PRIMARY KEY (A_SOURCE_FILE_CONFIG_KEY)
);
COMMENT ON COLUMN A_SOURCE_FILE_CONFIG.NEW_COLUMN IS 'Description';

-- 2. CREATE MIGRATION SCRIPT: database/migrations/XXX_add_new_column.sql
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG ADD (
    NEW_COLUMN VARCHAR2(50) DEFAULT 'default_value'
);
COMMENT ON COLUMN CT_MRDS.A_SOURCE_FILE_CONFIG.NEW_COLUMN IS 'Description';

-- 3. CREATE ROLLBACK SCRIPT: database/migrations/rollback/XXX_rollback_new_column.sql
ALTER TABLE CT_MRDS.A_SOURCE_FILE_CONFIG DROP COLUMN NEW_COLUMN;

-- 4. UPDATE PACKAGE CODE: database/CT_MRDS/packages/FILE_MANAGER.sql
-- Add logic to utilize the new configuration column

-- 5. DEPLOY: Execute migration first, then updated packages
```

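After deploying the migration, the new column can be confirmed from the data dictionary:

```sql
-- Verify the column exists after migration
SELECT column_name, data_type, data_default
  FROM all_tab_columns
 WHERE owner = 'CT_MRDS'
   AND table_name = 'A_SOURCE_FILE_CONFIG'
   AND column_name = 'NEW_COLUMN';
```
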
**Modifying Package Procedures:**

```sql
-- 1. UPDATE PACKAGE DEFINITION: database/CT_MRDS/packages/FILE_MANAGER.sql
-- Modify both .pks (specification) and .pkb (body) sections in the same file

-- 2. UPDATE WRAPPER PACKAGES: database/ODS/packages/FILE_MANAGER_ODS.sql
-- Update related wrapper packages if applicable

-- 3. DEPLOY: Execute updated package files
Get-Content "database/CT_MRDS/packages/FILE_MANAGER.sql" | sql "CT_MRDS/password@service"
Get-Content "database/ODS/packages/FILE_MANAGER_ODS.sql" | sql "ODS/password@service"

-- 4. UPDATE DOCUMENTATION: confluence/ directory
```

**Adding New Template Tables:**

```sql
-- 1. CREATE STRUCTURE DEFINITION: database/CT_ET_TEMPLATES/tables/NEW_TEMPLATE.sql
CREATE TABLE CT_ET_TEMPLATES.NEW_TEMPLATE (
    -- Column definitions matching expected CSV structure
    COL1 VARCHAR2(50),
    COL2 NUMBER,
    COL3 DATE
);

-- 2. CREATE MIGRATION SCRIPT: database/migrations/XXX_create_new_template.sql
-- (Same CREATE TABLE statement for deployment)

-- 3. UPDATE CONFIGURATION: Add new source file config entries
-- 4. UPDATE EXTERNAL TABLE SCRIPTS: If needed for this template
```

## Code Style Guidelines

**Text Formatting**:

- **NO EMOJI/ICONS**: Do not use emoji, Unicode symbols, or decorative icons in code, comments, documentation, or console output
- Use plain text with clear, professional formatting
- Prefer descriptive text labels over visual symbols
- Example: Use "ERROR:" instead of "🚨", "LOCATION:" instead of "📍"

**Line Break Formatting**:

- **Use cgBL constant**: Always use the predefined constant `cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);` for line breaks in error messages and logging
- Replace `CHR(10)` with `cgBL` for consistent CRLF formatting across Windows environments
- Exception: Keep `CHR(10)` in REGEXP_COUNT functions where it is required for line-counting operations

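A minimal sketch of the cgBL convention in an error message; the error code is an arbitrary illustrative value:

```sql
-- Package-level constant (as defined above)
cgBL CONSTANT VARCHAR2(2) := CHR(13)||CHR(10);

-- Usage: CRLF line breaks in a multi-line error message (error code illustrative)
RAISE_APPLICATION_ERROR(-20001,
    'ERROR: file validation failed' || cgBL ||
    'LOCATION: PROCESS_SOURCE_FILE');
```
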
**Professional Output**:

- Keep all logging, error messages, and documentation strictly text-based
- Use standard ASCII characters and proper formatting
- Maintain an enterprise-grade professional appearance

**Oracle DDL File Standards**:

- **MANDATORY**: All Oracle DDL files (.pkg, .pkb, .sql for packages/triggers) must end with a forward slash `/`
- **Placement**: Add `/` after the final `END;` statement, separated by a blank line
- **Example**: `END;` followed by a blank line, then `/` on a separate line
- **Scope**: Applies to all packages (specification and body), triggers, and PL/SQL blocks
- **Purpose**: Ensures proper SQL*Plus and SQLcl termination of PL/SQL blocks

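The termination rule looks like this at the end of a package body file:

```sql
    -- ... last procedure in the package body ...
END FILE_MANAGER;

/
```
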
The system is designed for enterprise-scale data processing with proper logging, error management, and OCI integration.