episode_title,episode_subtitle,season_number,episode_number,series_information,publication_datetime,episode_duration,explicit_content,content_format,primary_category,episode_summary,show_notes,primary_cta_text,primary_cta_url,secondary_cta_text,secondary_cta_url,episode_slug,meta_description,publish_apple,publish_spotify,publish_youtube,include_newsletter,internal_status,internal_notes,assigned_editor

For the multi-value fields (hosts, topics, related episodes, merchandise ideas), you have two options:
Option 1: Semicolon-separated values (simple)
hosts,topics_keywords,related_episodes,merchandise_ideas
john_doe;jane_smith,machine learning;startups;productivity,episode_123;episode_456,T-shirt: Debugging is life;Sticker: Podcast logo

Option 2: JSON strings (structured)
hosts_json,topics_keywords_json,related_episodes_json,merchandise_ideas_json
["john_doe","jane_smith"],["machine learning","startups"],["episode_123","episode_456"],["T-shirt: Debugging is life","Sticker: Podcast logo"]episode_title,episode_subtitle,season_number,episode_number,series_information,publication_datetime,episode_duration,explicit_content,content_format,primary_category,topics_keywords,episode_summary,show_notes,hosts,primary_cta_text,primary_cta_url,secondary_cta_text,secondary_cta_url,episode_slug,meta_description,publish_apple,publish_spotify,publish_youtube,include_newsletter,related_episodes,merchandise_ideas,internal_status,internal_notes,assigned_editor
"The Future of AI in Podcasting","Exploring how AI is transforming content creation",2,15,"Part 3 of 5: AI Tools Series","2025-12-10T10:00:00","01:15:30","no","deep-dive","Technology","AI;machine learning;content creation;automation","In this episode, we explore the latest AI tools that are revolutionizing podcast production...","<p>Timestamps:</p><p>00:00 - Introduction</p><p>10:30 - AI Transcription Tools</p>","john_doe;jane_smith","Get the AI Toolkit","https://example.com/ai-toolkit","Join our Discord","https://example.com/discord","future-of-ai-podcasting","Explore how AI tools are transforming podcast creation from transcription to distribution",TRUE,TRUE,FALSE,TRUE,"episode_14;episode_16","AI Podcast T-shirt;Neural Network Sticker","draft","Need to verify AI tool links before publishing","editor_jane"episodes.csv (Main Data):
episode_id,episode_title,episode_subtitle,season_number,episode_number,series_information,publication_datetime,episode_duration,explicit_content,content_format,primary_category,episode_summary,show_notes,primary_cta_text,primary_cta_url,secondary_cta_text,secondary_cta_url,episode_slug,meta_description,publish_apple,publish_spotify,publish_youtube,include_newsletter,internal_status,internal_notes,assigned_editor
EP001,"The Future of AI in Podcasting","Exploring how AI is transforming content creation",2,15,"Part 3 of 5: AI Tools Series","2025-12-10T10:00:00","01:15:30","no","deep-dive","Technology","In this episode...","<p>Detailed show notes...</p>","Get the AI Toolkit","https://example.com/ai-toolkit","Join our Discord","https://example.com/discord","future-of-ai-podcasting","Explore how AI tools are transforming...",TRUE,TRUE,FALSE,TRUE,"draft","Need to verify AI tool links","editor_jane"topics_keywords.csv (Many-to-Many):
episode_id,topic_keyword
EP001,AI
EP001,machine learning
EP001,content creation
EP001,automation

hosts.csv (Many-to-Many):
episode_id,host_id
EP001,john_doe
EP001,jane_smith

guests.csv (Related Data):
episode_id,guest_name,guest_title,guest_bio,guest_website
EP001,"Dr. Alex Chen","AI Research Lead at TechCorp","Dr. Chen specializes in natural language processing...","https://example.com/alex-chen"
EP001,"Sarah Miller","Podcast Producer","Sarah has produced over 500 episodes...","https://example.com/sarah-miller"chapters.csv (Related Data):
episode_id,chapter_timecode,chapter_title,chapter_description
EP001,"00:00:00","Introduction","Welcome to the episode"
EP001,"10:30:00","AI Transcription Tools","Comparing different AI transcription services"
EP001,"25:45:00","AI Content Generation","Using AI for show notes and summaries"
EP001,"45:20:00","Q&A Session","Answering audience questions"resources_links.csv (Related Data):
episode_id,link_text,link_url,link_description
EP001,"Otter.ai Transcription","https://otter.ai","AI-powered transcription service"
EP001,"Descript Audio Editing","https://descript.com","AI audio editing tool"
EP001,"ChatGPT for Content","https://chat.openai.com","AI content generation"Create a file named podcast_template.csv with the following content:
# PODCAST EPISODE CSV TEMPLATE
#
# INSTRUCTIONS:
# 1. Fill in one row per episode
# 2. Required fields are marked with [R]
# 3. Use semicolons (;) to separate multiple values in array fields
# 4. Dates must be in ISO format: YYYY-MM-DDTHH:MM:SS
# 5. For complex data (guests, chapters), use the separate CSV files
# 6. Boolean fields: Use TRUE/FALSE or 1/0
episode_title[R],episode_subtitle,season_number,episode_number,series_information,publication_datetime[R],episode_duration,explicit_content[R],content_format[R],primary_category[R],topics_keywords,episode_summary[R],show_notes,hosts[R],primary_cta_text,primary_cta_url,secondary_cta_text,secondary_cta_url,episode_slug,meta_description,publish_apple,publish_spotify,publish_youtube,include_newsletter,related_episodes,merchandise_ideas,internal_status,internal_notes,assigned_editor
# EXAMPLE DATA - Replace with your episode information:
"The Future of AI in Podcasting","Exploring how AI is transforming content creation",2,15,"Part 3 of 5: AI Tools Series","2025-12-10T10:00:00","01:15:30","no","deep-dive","Technology","AI;machine learning;content creation;automation","In this episode, we explore the latest AI tools that are revolutionizing podcast production. We discuss transcription services, content generation, and automated distribution.","<p>Timestamps:</p><p>00:00 - Introduction</p><p>10:30 - AI Transcription Tools</p><p>25:45 - AI Content Generation</p><p>45:20 - Q&A Session</p>","john_doe;jane_smith","Get the AI Toolkit","https://example.com/ai-toolkit","Join our Discord","https://example.com/discord","future-of-ai-podcasting","Explore how AI tools are transforming podcast creation from transcription to distribution",TRUE,TRUE,FALSE,TRUE,"episode_14;episode_16","AI Podcast T-shirt;Neural Network Sticker","draft","Need to verify AI tool links before publishing","editor_jane"
# ADD YOUR EPISODES BELOW:

// CSV to Form Data Mapper
function mapCSVToFormData(csvRow) {
  const formData = {
    // Direct mappings
    episode_title: csvRow.episode_title,
    episode_subtitle: csvRow.episode_subtitle,
    season_number: parseInt(csvRow.season_number) || null,
    episode_number: parseInt(csvRow.episode_number) || null,
    series_information: csvRow.series_information,
    publication_datetime: csvRow.publication_datetime,
    episode_duration: csvRow.episode_duration,
    explicit_content: csvRow.explicit_content,
    content_format: csvRow.content_format,
    primary_category: csvRow.primary_category,
    episode_summary: csvRow.episode_summary,
    show_notes: csvRow.show_notes,
    primary_cta_text: csvRow.primary_cta_text,
    primary_cta_url: csvRow.primary_cta_url,
    secondary_cta_text: csvRow.secondary_cta_text,
    secondary_cta_url: csvRow.secondary_cta_url,
    episode_slug: csvRow.episode_slug || generateSlug(csvRow.episode_title),
    meta_description: csvRow.meta_description,
    internal_status: csvRow.internal_status || 'draft',
    internal_notes: csvRow.internal_notes,
    assigned_editor: csvRow.assigned_editor,
    // Array fields (semicolon separated)
    topics_keywords: csvRow.topics_keywords ? csvRow.topics_keywords.split(';').map(s => s.trim()) : [],
    hosts: csvRow.hosts ? csvRow.hosts.split(';').map(s => s.trim()) : [],
    related_episodes: csvRow.related_episodes ? csvRow.related_episodes.split(';').map(s => s.trim()) : [],
    merchandise_ideas: csvRow.merchandise_ideas ? csvRow.merchandise_ideas.split(';').map(s => s.trim()) : [],
    // Boolean fields
    publish_apple: parseBoolean(csvRow.publish_apple),
    publish_spotify: parseBoolean(csvRow.publish_spotify),
    publish_youtube: parseBoolean(csvRow.publish_youtube),
    include_newsletter: parseBoolean(csvRow.include_newsletter),
    // Complex structures (will be loaded from separate files)
    guests: [],
    chapters: [],
    resources_links: []
  };
  return formData;
}
// Helper functions
function parseBoolean(value) {
  if (typeof value === 'string') {
    const normalized = value.trim().toLowerCase();
    return normalized === 'true' || normalized === '1';
  }
  return Boolean(value);
}

function generateSlug(title) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, '')
    .replace(/\s+/g, '-')
    .replace(/-+/g, '-')
    .replace(/^-+|-+$/g, ''); // strip leading/trailing hyphens (String.trim only removes whitespace)
}
// CSV Parser
function parseCSV(csvText) {
  // Note: this simple parser does not support newlines inside quoted fields
  const lines = csvText.split('\n').filter(line => !line.startsWith('#') && line.trim() !== '');
  if (lines.length < 2) return [];
  const headers = lines[0].split(',').map(h => h.trim());
  const episodes = [];
  for (let i = 1; i < lines.length; i++) {
    const values = parseCSVLine(lines[i]);
    const episode = {};
    headers.forEach((header, index) => {
      if (values[index] !== undefined) {
        episode[header] = values[index].trim();
      }
    });
    episodes.push(mapCSVToFormData(episode));
  }
  return episodes;
}

// Parse a CSV line with support for quoted values and "" escapes
function parseCSVLine(line) {
  const values = [];
  let currentValue = '';
  let insideQuotes = false;
  for (let i = 0; i < line.length; i++) {
    const char = line[i];
    if (char === '"') {
      if (insideQuotes && line[i + 1] === '"') {
        currentValue += '"'; // escaped double quote inside a quoted field
        i++;
      } else {
        insideQuotes = !insideQuotes;
      }
    } else if (char === ',' && !insideQuotes) {
      values.push(currentValue);
      currentValue = '';
    } else {
      currentValue += char;
    }
  }
  values.push(currentValue);
  return values;
}

import csv
import json
from datetime import datetime
def import_podcast_csv(csv_file_path):
    episodes = []
    with open(csv_file_path, 'r', encoding='utf-8') as file:
        # Skip comment lines
        lines = [line for line in file if not line.strip().startswith('#')]
        reader = csv.DictReader(lines)
        for row in reader:
            episode = {
                'episode_title': row['episode_title'],
                'episode_subtitle': row.get('episode_subtitle', ''),
                'season_number': int(row['season_number']) if row.get('season_number') else None,
                'episode_number': int(row['episode_number']) if row.get('episode_number') else None,
                'series_information': row.get('series_information', ''),
                'publication_datetime': row['publication_datetime'],
                'episode_duration': row.get('episode_duration', ''),
                'explicit_content': row['explicit_content'],
                'content_format': row['content_format'],
                'primary_category': row['primary_category'],
                'topics_keywords': [keyword.strip() for keyword in row.get('topics_keywords', '').split(';') if keyword.strip()],
                'episode_summary': row['episode_summary'],
                'show_notes': row.get('show_notes', ''),
                'hosts': [host.strip() for host in row.get('hosts', '').split(';') if host.strip()],
                'primary_cta_text': row.get('primary_cta_text', ''),
                'primary_cta_url': row.get('primary_cta_url', ''),
                'secondary_cta_text': row.get('secondary_cta_text', ''),
                'secondary_cta_url': row.get('secondary_cta_url', ''),
                'episode_slug': row.get('episode_slug') or generate_slug(row['episode_title']),
                'meta_description': row.get('meta_description', ''),
                # Booleans accept TRUE/FALSE or 1/0, as documented in the template
                'publish_apple': row.get('publish_apple', 'TRUE').strip().upper() in ('TRUE', '1'),
                'publish_spotify': row.get('publish_spotify', 'TRUE').strip().upper() in ('TRUE', '1'),
                'publish_youtube': row.get('publish_youtube', 'FALSE').strip().upper() in ('TRUE', '1'),
                'include_newsletter': row.get('include_newsletter', 'TRUE').strip().upper() in ('TRUE', '1'),
                'related_episodes': [ep.strip() for ep in row.get('related_episodes', '').split(';') if ep.strip()],
                'merchandise_ideas': [idea.strip() for idea in row.get('merchandise_ideas', '').split(';') if idea.strip()],
                'internal_status': row.get('internal_status', 'draft'),
                'internal_notes': row.get('internal_notes', ''),
                'assigned_editor': row.get('assigned_editor', ''),
                'guests': [],
                'chapters': [],
                'resources_links': [],
                'imported_at': datetime.now().isoformat()
            }
            episodes.append(episode)
    return episodes
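If you adopt the JSON-string variant from Option 2, the array fields can be parsed with a small fallback helper. This is a sketch, and `parse_array_field` is a hypothetical name not used elsewhere in the import script:

```python
import json

def parse_array_field(raw):
    """Parse an array field that may be JSON (Option 2) or semicolon-separated (Option 1)."""
    raw = (raw or '').strip()
    if not raw:
        return []
    if raw.startswith('['):
        try:
            return [str(item).strip() for item in json.loads(raw)]
        except json.JSONDecodeError:
            pass  # not valid JSON; fall back to semicolon splitting
    return [part.strip() for part in raw.split(';') if part.strip()]

print(parse_array_field('["john_doe","jane_smith"]'))  # ['john_doe', 'jane_smith']
print(parse_array_field('AI;machine learning'))        # ['AI', 'machine learning']
```

With this helper in place, the semicolon `split(';')` expressions in `import_podcast_csv` could be replaced by `parse_array_field(row.get(...))` so both CSV styles import identically.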
def generate_slug(title):
    """Generate URL slug from title"""
    import re
    slug = title.lower()
    slug = re.sub(r'[^a-z0-9\s-]', '', slug)
    slug = re.sub(r'\s+', '-', slug)
    slug = re.sub(r'-+', '-', slug)
    return slug.strip('-')
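As a quick sanity check of those slug rules (the function is restated here so the snippet runs on its own):

```python
import re

def generate_slug(title):
    """Generate a URL slug from a title (same rules as the import script)."""
    slug = title.lower()
    slug = re.sub(r'[^a-z0-9\s-]', '', slug)
    slug = re.sub(r'\s+', '-', slug)
    slug = re.sub(r'-+', '-', slug)
    return slug.strip('-')

print(generate_slug("The Future of AI in Podcasting!"))
# → the-future-of-ai-in-podcasting
```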
# Export to JSON for API integration
def export_to_json(episodes, output_file):
    with open(output_file, 'w', encoding='utf-8') as f:
        json.dump(episodes, f, indent=2, ensure_ascii=False)
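The empty `guests`, `chapters`, and `resources_links` lists produced by the importer are meant to be filled from the related CSV files shown earlier. A sketch of that join; `attach_related` is a hypothetical helper name, and it assumes the relational layout where every row carries an `episode_id`:

```python
from collections import defaultdict

def attach_related(episodes, related_rows, target_key):
    """Group related rows (e.g. from csv.DictReader) by episode_id and attach them."""
    by_episode = defaultdict(list)
    for row in related_rows:
        row = dict(row)                       # copy so pop() doesn't mutate the input
        by_episode[row.pop('episode_id')].append(row)
    for episode in episodes:
        episode[target_key] = by_episode.get(episode.get('episode_id'), [])
    return episodes

# Example with in-memory rows mirroring guests.csv:
episodes = [{'episode_id': 'EP001', 'guests': []}]
guest_rows = [{'episode_id': 'EP001', 'guest_name': 'Dr. Alex Chen'}]
attach_related(episodes, guest_rows, 'guests')
print(episodes[0]['guests'])  # [{'guest_name': 'Dr. Alex Chen'}]
```

In practice you would call it once per related file, e.g. `attach_related(episodes, csv.DictReader(open('chapters.csv')), 'chapters')`.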
# Example usage
if __name__ == "__main__":
    episodes = import_podcast_csv('podcast_episodes.csv')
    print(f"Imported {len(episodes)} episodes")
    # Export to JSON
    export_to_json(episodes, 'episodes_imported.json')

Save your episode data as podcast_episodes.csv, then import it in one of two ways.

Using the web form:
1. Click "Import from CSV"
2. Upload your CSV file
3. Review auto-populated data
4. Add missing complex data (guests, chapters)
5. Upload media files
6. Publish or save as draft

For multiple episodes:
# Create episodes_batch.csv with all episodes
# Run import script
episodes = import_podcast_csv('episodes_batch.csv')
for episode in episodes:
    save_to_database(episode)  # save_to_database is your own persistence function

Field reference:

| Field | Required | Format | Notes |
|---|---|---|---|
| episode_title | ✅ | Text | Max 200 chars |
| episode_subtitle | | Text | Max 150 chars |
| season_number | | Integer | Positive number |
| episode_number | | Integer | Positive number |
| publication_datetime | ✅ | ISO 8601 | YYYY-MM-DDTHH:MM:SS |
| explicit_content | ✅ | "yes"/"no" | Lowercase |
| content_format | ✅ | Predefined list | interview/deep-dive/etc. |
| primary_category | ✅ | Predefined list | Arts/Business/etc. |
| episode_summary | ✅ | Text | Can contain HTML |
| hosts | ✅ | Semicolon list | Existing host IDs |
| publish_apple | | Boolean | TRUE/FALSE or 1/0 |
| publish_spotify | | Boolean | TRUE/FALSE or 1/0 |
| internal_status | | Text | draft/ready/published |
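The stricter rules in the table can be enforced programmatically before import. A minimal sketch using only the standard library; `check_formats` is a hypothetical helper name, and only a few of the rules are shown:

```python
from datetime import datetime

def check_formats(row):
    """Validate a few of the format rules from the field reference table."""
    errors = []
    try:
        # ISO 8601 as required: YYYY-MM-DDTHH:MM:SS
        datetime.strptime(row['publication_datetime'], '%Y-%m-%dT%H:%M:%S')
    except (KeyError, ValueError):
        errors.append('publication_datetime must be YYYY-MM-DDTHH:MM:SS')
    if row.get('explicit_content') not in ('yes', 'no'):
        errors.append('explicit_content must be lowercase "yes" or "no"')
    if len(row.get('episode_title', '')) > 200:
        errors.append('episode_title exceeds 200 characters')
    return errors

print(check_formats({'publication_datetime': '2025-12-10T10:00:00',
                     'explicit_content': 'no',
                     'episode_title': 'Test'}))  # []
```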
This CSV template provides a flexible way to import podcast episodes in bulk while maintaining compatibility with the web form structure.