I have been doing Oracle-to-PostgreSQL migrations for over a decade across enterprises, cloud platforms, and everything in between. I have used commercial tools, cloud-native services, and custom scripts. And when it comes to table DDL conversion, I keep coming back to the same tool: Ora2pg. Not because it is the flashiest or the easiest to set up, but because once you understand its configuration model, nothing else comes close to the control it gives you. This post is my attempt to convince you of the same and to walk you through the specific features that I use on every single engagement.
Let me be direct: if you are doing an Oracle-to-PostgreSQL migration and you have not tried Ora2pg for your DDL conversion, you are making your life harder than it needs to be. This is a 20+ year battle-tested open-source project that has seen more Oracle schemas than most of us ever will.
This post is specifically about Table DDL conversion and the transformation knobs Ora2pg gives you. Not PL/SQL. Not data export. Just the DDL side because that alone deserves a dedicated conversation.

What Ora2Pg Actually Is (Quick Primer)
Ora2Pg connects to your Oracle database, scans it automatically, extracts its structure, data, or code, and generates PostgreSQL-compatible SQL scripts. One config file, ora2pg.conf, controls everything. Set your DSN, set your schema, set the export type, and you are off.
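In practice, the essentials look something like the sketch below. Every value here is a placeholder; the directive names (ORACLE_DSN, ORACLE_USER, ORACLE_PWD, SCHEMA, TYPE, OUTPUT) are standard ora2pg.conf directives, but check the version you run.

```ini
# Minimal ora2pg.conf sketch -- all values are illustrative placeholders
ORACLE_DSN   dbi:Oracle:host=oracle-host;sid=ORCL;port=1521
ORACLE_USER  migration_user
ORACLE_PWD   change_me
SCHEMA       HR
TYPE         TABLE
OUTPUT       tables.sql
```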
ora2pg -t TABLE -c ora2pg.conf -o tables.sql
That one command gets you tables with constraints – primary keys, foreign keys, unique constraints, check constraints all translated and ready to load. It handles sequences, indexes, partitions, grants, views, the works. But let us focus on what makes it genuinely powerful for DDL work: the transformation section.
The Transformation Section: Where the Real Power Lives
The ora2pg.conf transformation directives are what separate Ora2pg from a simple schema dumper. You are not just getting a mechanical type mapping; you are getting a configuration-driven transformation engine.
MODIFY_TYPE — Override Column Types on Your Terms
Oracle schemas are full of NUMBER(1) columns that are logically boolean but typed as numbers because Oracle did not have a native boolean until very recently. You know which ones they are, the application knows which ones they are, but a dumb type mapper does not.
MODIFY_TYPE lets you explicitly override the target type for specific table-column combinations:
MODIFY_TYPE ORDERS:IS_ACTIVE:boolean
MODIFY_TYPE CUSTOMERS:STATUS_FLAG:smallint
This gives you surgical control. The column that truly represents a boolean flips to boolean. The one that happens to be NUMBER(1) but actually stores an enum value stays as smallint. No global assumptions, no surprises downstream.
REPLACE_AS_BOOLEAN + BOOLEAN_VALUES — Boolean Transformation at Scale
If a whole class of columns, say every NUMBER(1) in the schema, needs to become boolean, converting them one by one with MODIFY_TYPE is tedious. That is where REPLACE_AS_BOOLEAN comes in:
REPLACE_AS_BOOLEAN NUMBER:1
This tells ora2pg to treat every NUMBER(1) as a candidate for boolean conversion. Combine it with BOOLEAN_VALUES, which defines what the true/false pairs look like in your data:
BOOLEAN_VALUES yes:no y:n 1:0 true:false enabled:disabled
Ora2Pg ships with a sensible default set. You extend it for your domain-specific values. This is the kind of feature you only build after seeing hundreds of real Oracle schemas.
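To make the effect concrete, here is a hypothetical before/after with REPLACE_AS_BOOLEAN NUMBER:1 in effect. The table and column names are invented for illustration; only the type change is the point.

```sql
-- Oracle source column:       IS_ACTIVE NUMBER(1) NOT NULL
-- Generated PostgreSQL table (hypothetical):
CREATE TABLE orders (
    order_id  bigint  NOT NULL,
    is_active boolean NOT NULL
);
```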
ORA_RESERVED_WORDS – Escaping Oracle-isms That Break PostgreSQL
Oracle has its own set of reserved words that are perfectly valid as column or table names in Oracle but will blow up in PostgreSQL. ORA_RESERVED_WORDS tells Ora2pg to double-quote those specific identifiers during export:
ORA_RESERVED_WORDS audit,comment,references
Real-world Oracle schemas use COMMENT, AUDIT, LEVEL, REFERENCES as column names all the time, particularly in legacy financial and ERP systems. Without this directive, your generated DDL silently produces broken SQL. With it, Ora2pg wraps those identifiers and your PostgreSQL DDL loads cleanly.
I have seen this one issue alone cause post-migration failures that took a team two days to trace back to a column name. One directive. Two days saved.
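Here is a hypothetical slice of generated DDL with ORA_RESERVED_WORDS audit,comment,references set; the table and columns are invented for illustration. Unquoted, a column named references would not even parse in PostgreSQL, since REFERENCES is a fully reserved word there.

```sql
-- Hypothetical output: the listed identifiers come out double-quoted
CREATE TABLE ledger_entries (
    entry_id     bigint NOT NULL,
    "audit"      varchar(30),
    "comment"    text,
    "references" varchar(100)
);
```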
PRESERVE_CASE – Case Sensitivity Control
By default, ora2pg lowercases everything, which is the right call for PostgreSQL. But some shops have mixed-case Oracle schemas they need to preserve exactly as-is (usually because the application references identifiers with specific casing). Flip PRESERVE_CASE to 1 and ora2pg double-quotes all object names to preserve their original case.
Use this carefully. If you enable it, every SQL statement touching those objects in PostgreSQL will need double-quoted identifiers. But when you need it, you need it.
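The downstream consequence is worth seeing spelled out. The identifiers below are made up for illustration:

```sql
-- With PRESERVE_CASE 1, identifiers keep their original case and are quoted,
-- so every query touching them must quote them too:
SELECT "CustomerId" FROM "Customers";

-- With the default (PRESERVE_CASE 0), everything is lowercased and unquoted:
SELECT customerid FROM customers;
```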
USE_RESERVED_WORDS – PostgreSQL Reserved Word Protection
The flip side of ORA_RESERVED_WORDS. If your Oracle table or column names happen to collide with PostgreSQL reserved words, USE_RESERVED_WORDS 1 tells Ora2pg to automatically quote them in the output DDL. One flag, no manual grep-and-replace exercise. But be cautious with identifiers that start with $: those will fail in PostgreSQL.
DATA_TYPE – Custom Type Mapping
The default Oracle-to-PostgreSQL type mapping in Ora2pg is solid, but every schema has quirks. DATA_TYPE lets you override the global type mapping rules:
DATA_TYPE NUMBER(1):boolean,NUMBER(*):bigint,VARCHAR2:varchar
This is coarser than MODIFY_TYPE (which is column-specific) but useful when you have organisation-wide conventions: for example, "we always used NUMBER(1) for booleans across every schema."
The discipline is knowing when to use DATA_TYPE (global convention) versus MODIFY_TYPE (surgical override). Apply DATA_TYPE too broadly and you will quietly corrupt edge cases. Apply MODIFY_TYPE too narrowly and you miss a class of columns. The right answer depends on the schema.
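A common pattern is to layer the two: a global DATA_TYPE convention plus a surgical MODIFY_TYPE exception. The column names below are invented, and the expectation that the column-level directive wins over the global mapping should be verified against your ora2pg version before you rely on it in production.

```ini
# Global convention: every NUMBER(1) across the organisation is a boolean...
DATA_TYPE    NUMBER(1):boolean

# ...except this one column, which really stores a small enum value.
MODIFY_TYPE  CUSTOMERS:STATUS_FLAG:smallint
```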
FORCE_IDENTITY_BIGINT – Enforce BIGINT for All Identity Columns
Enabling this directive migrates every identity column with the BIGINT data type, a PostgreSQL best practice for long-term stability: a plain integer identity tops out around 2.1 billion values, and re-typing a loaded table later is painful.
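Sketched out, the expected effect looks like the hypothetical output below (table and columns invented for illustration):

```sql
-- With FORCE_IDENTITY_BIGINT enabled, an Oracle identity column is
-- emitted as bigint rather than integer:
CREATE TABLE orders (
    order_id   bigint GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    order_date timestamp
);
```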
The Testing Framework: This Is What Closes the Loop
Transformation is half the story. How do you know the conversion is correct? Ora2Pg ships a built-in testing framework that most people never fully use, and that is a mistake.
TEST – Checks that all objects exported from Oracle actually exist in PostgreSQL. Tables, indexes, constraints, sequences — it compares both sides and reports gaps.
TEST_VIEW – Validates that views have been created correctly on the PostgreSQL side.
TEST_COUNT – Compares row counts across all tables between Oracle and PostgreSQL. Fast sanity check after data load.
TEST_DATA – Runs a deeper comparison, sampling actual data values across tables to verify the content, not just the counts.
# Run full validation
ora2pg -t TEST -c ora2pg.conf
ora2pg -t TEST_COUNT -c ora2pg.conf
ora2pg -t TEST_DATA -c ora2pg.conf
You run the DDL export, load it, migrate your data, and then run TEST + TEST_COUNT + TEST_DATA as a pipeline. That is a migration validation loop built into the same tool you used to convert. No external tooling needed to get started.
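To build intuition for what TEST_COUNT automates, here is a toy sketch of the comparison it performs, assuming row counts have already been collected from each side. The table names and counts are made up; the real tool queries both databases itself.

```python
# Row counts collected from each side (illustrative values).
oracle_counts = {"orders": 15000, "customers": 3200, "audit_log": 990}
pg_counts = {"orders": 15000, "customers": 3200, "audit_log": 987}


def diff_counts(ora, pg):
    """Return tables whose row counts differ (or are missing) between sides."""
    mismatches = {}
    for table in sorted(set(ora) | set(pg)):
        o, p = ora.get(table), pg.get(table)
        if o != p:
            mismatches[table] = (o, p)
    return mismatches


print(diff_counts(oracle_counts, pg_counts))
# → {'audit_log': (990, 987)}
```

A mismatch like the audit_log row above is exactly the kind of silent data-load gap you want surfaced before cutover, not after.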
The Assessment Report: Before You Touch a Line of DDL
Before any of the above, run SHOW_REPORT. It generates an HTML report that breaks down every Oracle object in your schema, estimates migration complexity (A/B/C tiers), calculates effort in person-days, and flags what needs manual rewriting.
ora2pg -t SHOW_REPORT --estimate_cost -c ora2pg.conf --dump_as_html > report.html
This is your pre-migration brief. It tells you what ora2pg can handle automatically versus what will need human intervention. On a real enterprise schema, this report saves weeks of scoping conversations.
Where Ora2pg Gets Tricky
Ora2Pg is excellent for table DDL, sequences, indexes, and constraints. Straightforward schemas, even large ones, come out clean with the right transformation flags set.
But real Oracle schemas are not always clean, and the complexity compounds at scale.
Configuration accuracy under scale.
A 50-table schema is manageable to tune by hand. A 500-table schema across 10 business schemas is not. The number of MODIFY_TYPE decisions, the boolean column candidates, the reserved word conflicts, the partition strategies: it all multiplies. Miss a flag in one schema and you get DDL that loads without errors but behaves wrong. That is the hardest class of migration bug to catch.
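The repetitive part of this tuning can at least be sketched. The toy below generates MODIFY_TYPE lines from a mocked column catalog using a crude "every NUMBER(1) is a boolean" heuristic; in practice you would query ALL_TAB_COLUMNS in Oracle and apply far more judgment, so treat both the catalog rows and the heuristic as illustrative assumptions.

```python
# Mocked slice of an Oracle column catalog: (table, column, type, precision).
catalog = [
    ("ORDERS", "IS_ACTIVE", "NUMBER", 1),
    ("ORDERS", "AMOUNT", "NUMBER", 12),
    ("CUSTOMERS", "STATUS_FLAG", "NUMBER", 1),
]


def boolean_candidates(cols):
    """Emit a MODIFY_TYPE directive for every NUMBER(1) column found."""
    return [
        f"MODIFY_TYPE {table}:{column}:boolean"
        for table, column, dtype, precision in cols
        if dtype == "NUMBER" and precision == 1
    ]


for line in boolean_candidates(catalog):
    print(line)
```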
Directive interaction. DATA_TYPE, MODIFY_TYPE, REPLACE_AS_BOOLEAN, and BOOLEAN_VALUES interact. Setting them in the wrong order or with conflicting scopes produces unexpected output. You need to understand how ora2pg resolves precedence before you write a production config.
Non-standard Oracle patterns.
Column defaults using Oracle sequences, function-based index definitions, interval partitions with custom rules, domain indexes: these often need post-processing that ora2pg.conf alone cannot handle. Ora2Pg gets you 90–95% of the way. The remaining 5–10% is where migration engineers earn their keep.
None of this is a criticism of Ora2pg. These are the inherent complexities of Oracle-to-PostgreSQL migration at enterprise scale. Ora2Pg gives you the tools to handle all of it. The discipline is in knowing how to apply them.
One More Thing: DCGMigrator
I will keep this short because it is not the point of this post.
The challenge with Ora2pg at scale is not the tool; it is the configuration. When you are dealing with 10, 20, 50 Oracle schemas or large sets of tables, figuring out the right combination of MODIFY_TYPE overrides, boolean column candidates, reserved word conflicts, and type mapping rules for each schema is a repetitive analysis job. Miss something and you get DDL that loads clean but behaves wrong.
DCGMigrator, the tool I built at DataCloudGaze, scans your Oracle database and generates a best-fit ora2pg.conf automatically. It does the discovery work so you do not have to. You still run ora2pg. You still own the output. You just start from a configuration grounded in what is actually in your schema, not guesswork.
Use ora2pg. Let DCGMigrator get your configuration right faster. Connect to Know More.
Bottom Line
Ora2Pg has been around since 2001. It is free, open-source, actively maintained, and has more Oracle-to-PostgreSQL institutional knowledge baked into it than any commercial tool I have used.
For table DDL conversion, there is no better starting point. Learn the transformation directives: MODIFY_TYPE, REPLACE_AS_BOOLEAN, BOOLEAN_VALUES, ORA_RESERVED_WORDS, USE_RESERVED_WORDS, PRESERVE_CASE, DATA_TYPE. Understand how they interact, get comfortable with the assessment and testing pipeline, and you will be converting Oracle schemas to PostgreSQL with real confidence.
The configuration is the craft. Everything else builds on top of that foundation.