Use AI to intelligently migrate schemas between different database systems
ChartDB’s AI-powered migration feature uses large language models to intelligently convert database schemas between different SQL dialects, handling complex type conversions and dialect-specific features that traditional converters struggle with.
AI migration leverages OpenAI’s language models to provide context-aware database schema translation, going beyond simple type mapping to understand the semantic meaning of your schema and generate optimized DDL for the target database.
AI migration is available as an enhanced alternative to ChartDB’s built-in deterministic cross-dialect conversion. Use it when migrating between databases that don’t have deterministic conversion support, or when you need more sophisticated transformations.
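As a sketch of that routing decision, a helper like the one below could send a migration through the deterministic converter when the dialect pair is supported and through AI otherwise. The names and the supported-pair list are illustrative assumptions, not ChartDB's actual API:

```typescript
// Illustrative sketch only -- the pair list and helper names are
// assumptions, not ChartDB's real API.
type DatabaseType = 'postgresql' | 'mysql' | 'oracle' | 'sqlserver';

// Dialect pairs the built-in deterministic converter is assumed to handle.
const deterministicPairs = new Set<string>([
    'postgresql->mysql',
    'mysql->postgresql',
]);

const hasCrossDialectSupport = (
    source: DatabaseType,
    target: DatabaseType
): boolean => deterministicPairs.has(`${source}->${target}`);

// Route the migration accordingly.
const migrationStrategy = (
    source: DatabaseType,
    target: DatabaseType
): 'deterministic' | 'ai' =>
    hasCrossDialectSupport(source, target) ? 'deterministic' : 'ai';

console.log(migrationStrategy('postgresql', 'mysql')); // deterministic
console.log(migrationStrategy('oracle', 'postgresql')); // ai
```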
```sql
-- PostgreSQL with array types and JSONB
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) NOT NULL UNIQUE,
    tags TEXT[],
    settings JSONB,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- GIN index for array search
CREATE INDEX idx_users_tags ON users USING GIN(tags);

-- PostgreSQL-specific regex check
ALTER TABLE users ADD CONSTRAINT check_email
    CHECK (email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}$');
```
```sql
-- Oracle with sequences and specific types
CREATE SEQUENCE emp_seq START WITH 1000 INCREMENT BY 1;

CREATE TABLE employees (
    employee_id NUMBER PRIMARY KEY,
    email VARCHAR2(255) NOT NULL,
    hire_date DATE DEFAULT SYSDATE,
    salary NUMBER(10,2) CHECK (salary > 0),
    department_id NUMBER,
    CONSTRAINT fk_dept FOREIGN KEY (department_id)
        REFERENCES departments(department_id)
);

CREATE INDEX idx_emp_email ON employees(email);
```
```sql
-- AI recognizes this is a timestamp field
-- and suggests appropriate defaults
Source: created_date DATE
Target: created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
```
```sql
-- AI infers proper cascade behavior
Source: Basic FK without cascade
Target: FK with ON DELETE CASCADE (for child records)
Target: FK with ON DELETE SET NULL (for optional references)
```
```sql
-- AI adds appropriate indexes
Source: Large text field
Target: + Full-text search index
Source: Foreign key column
Target: + Index for FK lookups
```
```sql
-- AI chooses optimal types
Source: NVARCHAR(MAX)
Target: TEXT (if storing large content)
Target: VARCHAR(n) (if predictable size)
```
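For contrast, a purely deterministic converter works from a fixed lookup table like the sketch below (the PostgreSQL-to-MySQL mappings shown are illustrative, not exhaustive); it cannot weigh context such as expected content size the way the examples above do:

```typescript
// Naive one-to-one type map -- the kind of lookup a deterministic
// converter uses. Mappings are illustrative, not exhaustive.
const pgToMySQLTypes: Record<string, string> = {
    'SERIAL': 'INT AUTO_INCREMENT',
    'JSONB': 'JSON',
    'TEXT[]': 'JSON', // MySQL has no array type; JSON is a common stand-in
    'NUMERIC': 'DECIMAL(10,2)',
    'TIMESTAMP': 'TIMESTAMP',
};

// Unknown types pass through unchanged -- a deterministic converter has
// no context to fall back on.
const mapType = (pgType: string): string =>
    pgToMySQLTypes[pgType.toUpperCase()] ?? pgType;

console.log(mapType('jsonb'));  // JSON
console.log(mapType('text[]')); // JSON
```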
Complex database features are intelligently translated:
Partitioning
```sql
-- PostgreSQL range partitioning
CREATE TABLE measurements (
    id SERIAL,
    measured_at TIMESTAMP,
    value NUMERIC
) PARTITION BY RANGE (measured_at);

-- AI converts to MySQL partitioning syntax
CREATE TABLE measurements (
    id INT AUTO_INCREMENT,
    measured_at TIMESTAMP,
    value DECIMAL(10,2),
    PRIMARY KEY (id, measured_at)
) PARTITION BY RANGE (YEAR(measured_at)) (
    PARTITION p2023 VALUES LESS THAN (2024),
    PARTITION p2024 VALUES LESS THAN (2025)
);
```
Generated Columns
```sql
-- PostgreSQL generated column
CREATE TABLE products (
    price NUMERIC,
    tax_rate NUMERIC,
    total NUMERIC GENERATED ALWAYS AS (price * (1 + tax_rate)) STORED
);

-- AI converts to MySQL generated column
CREATE TABLE products (
    price DECIMAL(10,2),
    tax_rate DECIMAL(4,3),
    total DECIMAL(10,2) AS (price * (1 + tax_rate)) STORED
);
```
Window Functions in Views
```sql
-- Complex view with window functions
-- AI ensures syntax compatibility across databases
CREATE VIEW ranked_products AS
SELECT
    product_id,
    name,
    price,
    ROW_NUMBER() OVER (PARTITION BY category ORDER BY price DESC) AS rank
FROM products;
```
Triggers
```sql
-- PostgreSQL trigger
CREATE OR REPLACE FUNCTION update_modified_column()
RETURNS TRIGGER AS $$
BEGIN
    NEW.modified_at = NOW();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- AI converts to MySQL trigger
DELIMITER $$
CREATE TRIGGER update_modified_column
BEFORE UPDATE ON table_name
FOR EACH ROW
BEGIN
    SET NEW.modified_at = NOW();
END$$
DELIMITER ;
```
ChartDB uses carefully crafted prompts for optimal results:
```typescript
const generateMigrationPrompt = (
    sourceSQL: string,
    sourceDB: DatabaseType,
    targetDB: DatabaseType
): string => {
    return `You are a database migration expert. Convert this ${sourceDB} schema to ${targetDB}.

Requirements:
1. Preserve all data integrity constraints
2. Use native ${targetDB} features where appropriate
3. Optimize for ${targetDB} performance best practices
4. Include migration notes for any non-trivial conversions
5. Add comments explaining type conversions
6. Ensure referential integrity is maintained

Source SQL (${sourceDB}):
${sourceSQL}

Generate optimized ${targetDB} DDL with:
- Type conversions explained
- Index strategy for ${targetDB}
- Any ${targetDB}-specific optimizations
- Migration warnings if applicable

Return only valid ${targetDB} SQL DDL.`;
};
```
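Once generated, the prompt is sent to the model. A minimal sketch of wrapping it in an OpenAI-style chat-completions request body follows; the model name and temperature here are assumptions for illustration, not ChartDB's actual settings:

```typescript
// Sketch: wrap a migration prompt in a chat-completions request body.
// Model name and temperature are assumptions for illustration.
const buildChatRequest = (prompt: string) => ({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: prompt }],
    temperature: 0, // keep DDL output as deterministic as possible
});

const request = buildChatRequest(
    'Convert this postgresql schema to mysql: CREATE TABLE t (id SERIAL);'
);
console.log(request.messages[0].role); // user
```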
AI migration can fail due to API limits, token limits, or invalid responses. ChartDB implements automatic retry with exponential backoff and falls back to deterministic conversion when available.
```typescript
const migrateWithRetry = async (
    sourceSQL: string,
    sourceDB: DatabaseType,
    targetDB: DatabaseType,
    maxRetries: number = 3
): Promise<string> => {
    for (let attempt = 1; attempt <= maxRetries; attempt++) {
        try {
            const result = await callOpenAI(sourceSQL, targetDB);
            return result.sql;
        } catch (error) {
            if (attempt === maxRetries) {
                // Fall back to deterministic conversion if available
                if (hasCrossDialectSupport(sourceDB, targetDB)) {
                    return deterministicConversion(sourceSQL, targetDB);
                }
                throw new Error('AI migration failed after all retries');
            }
            // Exponential backoff: 2s, 4s, 8s, ...
            await sleep(Math.pow(2, attempt) * 1000);
        }
    }
};
```
Use deterministic conversion when available for production migrations. Use AI migration for unsupported paths or when you need additional optimization suggestions.
Your schema is sent to OpenAI’s API. Do not include sensitive data, credentials, or proprietary information in table names, column names, or comments during AI migration.
Security best practices:
Sanitize schema before migration
Use environment variables for API keys
Review OpenAI’s data usage policies
Consider self-hosted LLM alternatives for sensitive schemas
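As a sketch of the first point, identifiers can be swapped for neutral placeholders before the DDL leaves your machine, and mapped back afterwards. This is a minimal illustration; `sanitizeIdentifiers` is a hypothetical helper, not part of ChartDB:

```typescript
// Hypothetical helper: replace sensitive identifiers with neutral
// placeholders before sending DDL to an external LLM, keeping a reverse
// map so the migrated schema can be renamed back afterwards.
const sanitizeIdentifiers = (
    sql: string,
    identifiers: string[]
): { sanitized: string; mapping: Map<string, string> } => {
    const mapping = new Map<string, string>();
    let sanitized = sql;
    identifiers.forEach((name, i) => {
        const placeholder = `obj_${i + 1}`;
        mapping.set(placeholder, name); // placeholder -> original name
        sanitized = sanitized.split(name).join(placeholder);
    });
    return { sanitized, mapping };
};

const { sanitized, mapping } = sanitizeIdentifiers(
    'CREATE TABLE customer_accounts (account_balance DECIMAL);',
    ['customer_accounts', 'account_balance']
);
console.log(sanitized);            // CREATE TABLE obj_1 (obj_2 DECIMAL);
console.log(mapping.get('obj_1')); // customer_accounts
```

Note the mapping must be kept locally (never sent with the prompt) for the rename-back step to preserve confidentiality.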