AI Prompt Task

Overview

The AI Prompt Task uses artificial intelligence to generate content, analyze text, extract information, classify data, or answer questions. Powered by OpenAI's GPT models, it brings advanced language understanding to your workflows.

When to use this task:

  • Generate personalized email content
  • Analyze customer sentiment
  • Extract structured data from unstructured text
  • Classify leads or support tickets
  • Summarize long content
  • Answer customer questions
  • Translate languages
  • Create product descriptions
  • Draft responses or proposals

Key Features:

  • GPT-4 and GPT-3.5 Turbo models
  • Custom prompt templates
  • Context from previous tasks
  • JSON-structured responses
  • Temperature control for creativity
  • Token limit management
  • Cost-effective operation
  • Reliable error handling

[SCREENSHOT NEEDED: AI Prompt task configuration showing prompt template and model selection]

Quick Start

  1. Add AI Prompt task
  2. Select AI model
  3. Write prompt with context
  4. Configure response format
  5. Test with sample data
  6. Save

Simple Example:

Prompt: Classify this lead as Hot, Warm, or Cold based on:
- Email: {{task_49001_email}}
- Message: {{task_49001_message}}
- Company: {{task_49001_company}}

Output: {{task_38001_response}}

Configuring the AI Task

Model Selection

GPT-4 Turbo (Recommended for complex tasks)

  • More accurate and nuanced
  • Better reasoning
  • Handles complex instructions
  • Higher cost per token

GPT-3.5 Turbo (Recommended for simple tasks)

  • Fast responses
  • Cost-effective
  • Good for straightforward tasks
  • Lower cost per token

Selection Guide:

  • Use GPT-4 for: Analysis, complex extraction, nuanced writing
  • Use GPT-3.5 for: Classification, simple generation, formatting

Writing Effective Prompts

Structure:

  1. Context - What information does the AI need?
  2. Task - What should the AI do?
  3. Format - How should the AI respond?
  4. Constraints - Any rules or limitations?

Example:

Context: You are a sales assistant analyzing leads.

Task: Classify this lead based on the information provided:
- Name: {{task_49001_name}}
- Email: {{task_49001_email}}
- Company: {{task_49001_company}}
- Message: {{task_49001_message}}

Format: Respond with only: Hot, Warm, or Cold

Constraints:
- Hot = Has company email + mentions budget/timeline
- Warm = Has company email or shows clear interest
- Cold = Generic inquiry or personal email

Using Dynamic Context

Include data from previous tasks:

Analyze this customer support ticket:

Subject: {{task_46001_subject}}
From: {{task_46001_email}}
Message: {{task_46001_body}}

Customer history:
- Total orders: {{task_43001_order_count}}
- LTV: ${{task_43001_lifetime_value}}
- Last contact: {{task_43001_last_contact_date}}

Determine the urgency (High/Medium/Low) and suggest a response category (Refund/Support/Sales).

Response Format Options

Plain Text:

Output: {{task_38001_response}}

JSON Structure:

Respond in JSON format:
{
  "category": "Support",
  "urgency": "High",
  "summary": "Brief description"
}

Access: {{task_38001_category}}, {{task_38001_urgency}}

List:

Provide 3 bullet points.

Access: {{task_38001_response}}

Temperature Setting

Controls creativity vs consistency:

Temperature    Behavior                    Use For
0.0 - 0.3      Deterministic, consistent   Classification, extraction, analysis
0.4 - 0.7      Balanced                    General content generation
0.8 - 1.0      Creative, varied            Marketing copy, creative writing

Configuration:

Temperature: 0.2 (for consistent classification)
Temperature: 0.7 (for creative email content)

Token Limits

Max Tokens: Controls response length

  • Short responses: 100-200 tokens
  • Paragraphs: 300-500 tokens
  • Long content: 1000-2000 tokens

Note: ~4 characters = 1 token on average
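Using the ~4-characters-per-token rule of thumb above, a Code task can estimate token usage before a call — a rough sketch only, since actual tokenization varies by model and language:

```javascript
// Rough token estimate using the ~4 characters per token heuristic.
// Approximation only; real tokenizers split text differently.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}
```

This is handy for choosing a max_tokens value or sanity-checking prompt length before sending.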

Output Fields

Field                     Description                    Example
task_38001_response       Full AI response               Full generated text
task_38001_tokens_used    Tokens consumed                327
task_38001_model          Model used                     gpt-4-turbo
task_38001_[field]        JSON fields (if structured)    task_38001_category

Real-World Examples

Example 1: Lead Qualification & Scoring

Workflow:

  1. Website Form - Contact form
  2. AI Prompt - Analyze and score lead
  3. Variable - Store score
  4. If Task - Route based on score
  5a. Hot lead → Email sales manager
  5b. Warm lead → Add to nurture
  5c. Cold lead → Auto-responder only

AI Prompt Configuration:

Model: GPT-4 Turbo
Temperature: 0.2

Prompt:
You are a B2B lead qualification expert. Analyze this lead and provide a qualification score.

Lead Information:
- Name: {{task_49001_name}}
- Email: {{task_49001_email}}
- Company: {{task_49001_company}}
- Phone: {{task_49001_phone}}
- Message: {{task_49001_message}}
- Source: {{task_49001_utm_source}}

Respond in JSON format:
{
  "score": 0-100,
  "classification": "Hot/Warm/Cold",
  "reasoning": "Brief explanation",
  "recommended_action": "Immediate call/Email follow-up/Nurture campaign",
  "key_signals": ["signal1", "signal2"]
}

Scoring criteria:
- Business email domain (not Gmail/Yahoo): +25
- Company name provided: +15
- Phone number provided: +10
- Message mentions budget: +20
- Message mentions timeline: +20
- Specific product/service mentioned: +10
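Because the scoring criteria are deterministic, a Code task can recompute the score from the same rules and flag large disagreements with the model's JSON output. A sketch — the free-email domain list is an illustrative assumption, and field names mirror the prompt:

```javascript
// Recompute the lead score from the prompt's scoring criteria so the
// AI's JSON score can be sanity-checked. Illustrative sketch only.
function scoreLead(lead) {
  const freeDomains = ["gmail.com", "yahoo.com", "hotmail.com", "outlook.com"];
  const domain = (lead.email || "").split("@")[1] || "";
  const msg = (lead.message || "").toLowerCase();

  let score = 0;
  if (domain && !freeDomains.includes(domain)) score += 25; // business email
  if (lead.company) score += 15;                            // company provided
  if (lead.phone) score += 10;                              // phone provided
  if (msg.includes("budget")) score += 20;                  // mentions budget
  if (msg.includes("timeline")) score += 20;                // mentions timeline
  if (lead.product_mentioned) score += 10;                  // specific product
  return score;
}
```

If the rule-based score and the AI score diverge sharply, route the lead for human review instead of trusting either number.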

Variable Task:

Variable Name: lead_score
Value: {{task_38001_score}}

If Task:

Condition: {{task_38001_classification}} = "Hot"
True: Send immediate alert to sales manager

Email to Sales Manager:

Subject: 🔥 Hot Lead: {{task_49001_name}} - Score: {{task_38001_score}}

A high-value lead just came in!

Contact: {{task_49001_name}}
Company: {{task_49001_company}}
Score: {{task_38001_score}}/100

AI Analysis: {{task_38001_reasoning}}

Key Signals:
{{task_38001_key_signals}}

Recommended Action: {{task_38001_recommended_action}}

View in CRM: https://app.basecloud.com/contacts/{{task_15001_contact_id}}

Example 2: Customer Sentiment Analysis

Workflow:

  1. Email Trigger - Customer email received
  2. AI Prompt - Analyze sentiment
  3. If Task - Check if negative
  4. Match to Client - Find customer
  5. Edit Client - Tag if negative
  6. Email - Alert manager for negative sentiment

AI Prompt:

Model: GPT-4 Turbo
Temperature: 0.1

Prompt:
Analyze the sentiment of this customer email.

From: {{task_50001_from_email}}
Subject: {{task_50001_subject}}
Body: {{task_50001_body}}

Respond in JSON:
{
  "sentiment": "Positive/Neutral/Negative",
  "sentiment_score": -10 to 10,
  "urgency": "High/Medium/Low",
  "primary_emotion": "Happy/Frustrated/Angry/Confused/Neutral",
  "key_issues": ["issue1", "issue2"],
  "requires_immediate_attention": true/false,
  "summary": "One sentence summary"
}

Consider:
- Tone and language used
- Presence of complaints or praise
- Urgency indicators (ASAP, urgent, immediately)
- Threats to cancel or leave
- Positive feedback or thanks

If Task:

Condition: {{task_38001_requires_immediate_attention}} = true
True: Alert manager and tag customer

Edit Client:

Contact ID: {{task_15001_contact_id}}
Add Tag: Urgent - Negative Sentiment
Add Note: AI Sentiment Analysis ({{task_48001_current_date}}):
Sentiment: {{task_38001_sentiment}} ({{task_38001_sentiment_score}}/10)
Primary Emotion: {{task_38001_primary_emotion}}
Issues: {{task_38001_key_issues}}
Summary: {{task_38001_summary}}
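Since key_issues is returned as an array, a Code task between the AI Prompt and Edit Client steps can flatten it into a readable line for the CRM note — a minimal sketch, with the fallback text as an assumption:

```javascript
// Flatten the AI's key_issues array into one readable line for a CRM note.
function formatIssues(issues) {
  if (!Array.isArray(issues) || issues.length === 0) return "None identified";
  return issues.join("; ");
}
```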

Example 3: Structured Data Extraction

Workflow:

  1. Email Trigger - Invoice/order email
  2. AI Prompt - Extract order details
  3. MySQL Query - Insert into database
  4. Match to Client - Link to customer
  5. Email - Confirmation

AI Prompt:

Model: GPT-3.5 Turbo (cost-effective for extraction)
Temperature: 0.0

Prompt:
Extract order information from this email.

Email Body:
{{task_50001_body}}

Respond in JSON format with these exact fields:
{
  "order_number": "string or null",
  "order_date": "YYYY-MM-DD or null",
  "customer_name": "string or null",
  "customer_email": "string or null",
  "items": [
    {
      "product_name": "string",
      "quantity": number,
      "price": number
    }
  ],
  "total_amount": number or null,
  "currency": "USD/EUR/GBP/ZAR or null",
  "shipping_address": "string or null"
}

Rules:
- Extract only information explicitly stated
- Use null if information not found
- Convert dates to YYYY-MM-DD format
- Convert prices to numbers (remove currency symbols)
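The model usually follows these rules, but a Code task can enforce them after parsing the response — null for missing values, YYYY-MM-DD dates, numeric prices. A sketch, assuming the AI output has already been parsed into an object:

```javascript
// Enforce the extraction rules on the parsed AI output: numeric prices,
// ISO dates, and null for anything missing or unparseable.
function normalizeOrder(order) {
  const toNumber = (v) => {
    if (v === null || v === undefined) return null;
    const n = parseFloat(String(v).replace(/[^0-9.\-]/g, "")); // strip currency symbols
    return Number.isNaN(n) ? null : n;
  };
  const toIsoDate = (v) => {
    if (!v) return null;
    const d = new Date(v);
    return Number.isNaN(d.getTime()) ? null : d.toISOString().slice(0, 10);
  };
  return {
    ...order,
    order_date: toIsoDate(order.order_date),
    total_amount: toNumber(order.total_amount),
  };
}
```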

MySQL Query:

INSERT INTO orders (
  order_number, order_date, customer_email,
  items_json, total_amount, currency, source
) VALUES (?, ?, ?, ?, ?, ?, 'email')

Parameters:

{{task_38001_order_number}}
{{task_38001_order_date}}
{{task_38001_customer_email}}
{{task_38001_items}}
{{task_38001_total_amount}}
{{task_38001_currency}}
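Because items is an array, it needs to be serialized before binding to the items_json parameter. A Code task between the AI Prompt and MySQL Query can handle that — a sketch, with field names mirroring the extraction schema and the line-item fallback total as an assumption:

```javascript
// Serialize the AI-extracted items array for the items_json column and
// fall back to a line-item sum when total_amount came back null.
function prepareOrderParams(extracted) {
  const items = extracted.items || [];
  const lineTotal = items.reduce((sum, i) => sum + i.quantity * i.price, 0);
  return {
    items_json: JSON.stringify(items),
    total_amount: extracted.total_amount ?? lineTotal,
  };
}
```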

Example 4: Content Generation for Personalized Emails

Workflow:

  1. Schedule Trigger - Monthly newsletter
  2. MySQL Query - Get customers with interests
  3. Loop - For each customer
  4. AI Prompt - Generate personalized content
  5. Email - Send personalized newsletter

AI Prompt (inside loop):

Model: GPT-4 Turbo
Temperature: 0.7 (creative but controlled)

Prompt:
Write a personalized email section for this customer.

Customer Profile:
- Name: {{task_29001_first_name}}
- Industry: {{task_29001_custom_industry}}
- Interests: {{task_29001_custom_interests}}
- Products Used: {{task_29001_custom_products}}
- Last Purchase: {{task_29001_custom_last_purchase_date}}

Company News:
- New Feature: AI-powered analytics dashboard
- Case Study: Manufacturing company increased efficiency 40%
- Upcoming Webinar: Industry best practices

Task: Write a 2-paragraph personalized section that:
1. Addresses the customer by name and references their industry/interests
2. Highlights the company news items most relevant to them
3. Includes a clear call-to-action
4. Maintains professional but friendly tone

Keep it under 150 words.

Email Task:

To: {{task_29001_email}}
Subject: {{task_29001_first_name}}, Check Out What's New

Hi {{task_29001_first_name}},

{{task_38001_response}}

Best regards,
The Team

Example 5: Automatic Response Classification & Routing

Workflow:

  1. Webhook In - Support ticket submitted
  2. AI Prompt - Classify ticket
  3. If Task - Route based on category
  4a. Technical → Engineering team
  4b. Billing → Finance team
  4c. Sales → Sales team
  4d. General → Support team

AI Prompt:

Model: GPT-3.5 Turbo
Temperature: 0.1

Prompt:
Classify this support ticket into the correct department.

Ticket Details:
Subject: {{task_46001_subject}}
From: {{task_46001_customer_email}}
Message: {{task_46001_message}}
Priority: {{task_46001_priority}}

Respond in JSON:
{
  "department": "Technical/Billing/Sales/General",
  "subcategory": "Specific issue type",
  "estimated_complexity": "Simple/Medium/Complex",
  "suggested_priority": "Low/Medium/High/Urgent",
  "requires_technical_expertise": true/false,
  "keywords": ["keyword1", "keyword2"],
  "routing_reason": "Brief explanation"
}

Classification Rules:
- Technical: Bugs, errors, integrations, API issues, performance
- Billing: Invoices, payments, refunds, pricing, subscriptions
- Sales: Upgrades, demos, new features, quotes, trials
- General: Questions, feedback, how-to, documentation

If Task 1:

Condition: {{task_38001_department}} = "Technical"
True: Assign to engineering team with high priority

If Task 2:

Condition: {{task_38001_requires_technical_expertise}} = true
True: Add "Technical Review Needed" flag

Advanced Techniques

Chain of Thought Prompting

For complex reasoning:

Prompt:
Think step-by-step to analyze this deal:

Deal Info: [data]

Step 1: List all positive signals
Step 2: List all red flags
Step 3: Compare to typical winning deals
Step 4: Provide final recommendation with confidence score

Few-Shot Learning

Provide examples in prompt:

Classify these leads:

Examples:
Input: "CEO of Acme Corp, interested in enterprise plan, budget approved"
Output: Hot

Input: "Student asking about features"
Output: Cold

Now classify:
Input: {{task_49001_message}}
Output:

Response Validation

Use Code task after AI:

// Normalize the AI response before checking it: models sometimes add
// whitespace or a trailing newline around a one-word classification.
const response = (input.task_38001_response || '').trim();
const validCategories = ['Hot', 'Warm', 'Cold'];

if (!validCategories.includes(response)) {
  return { 
    validated: false, 
    category: 'Cold' // Default fallback
  };
}

return { validated: true, category: response };

Best Practices

Prompt Engineering

  1. Be specific - Clear instructions yield better results
  2. Provide context - More context = more accurate
  3. Define format - Specify exact output structure
  4. Include constraints - Set boundaries and rules
  5. Test variations - Iterate to find best prompt

Cost Management

  1. Choose right model - GPT-3.5 for simple, GPT-4 for complex
  2. Limit tokens - Set max_tokens appropriately
  3. Reduce temperature for consistency - Fewer retries
  4. Cache common prompts - Use Variable task
  5. Batch similar requests - Process in loops

Reliability

  1. Always validate - Check AI response format
  2. Provide defaults - Fallback values if AI fails
  3. Handle errors - Use If task to check success
  4. Don't trust 100% - Human review for critical decisions
  5. Log responses - Track for quality monitoring

Security

  1. Sanitize inputs - Clean user data before AI
  2. Don't expose sensitive data - Mask PII when possible
  3. Validate outputs - Check for injection attempts
  4. Rate limit - Prevent abuse
  5. Monitor costs - Set alerts for unusual usage
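For the "don't expose sensitive data" point, a Code task can mask obvious PII before it is interpolated into a prompt. A minimal sketch — the regex patterns are illustrative, not exhaustive, and real PII detection needs a dedicated tool:

```javascript
// Mask obvious PII before interpolating user text into an AI prompt.
// Patterns are illustrative only; do not rely on them for compliance.
function maskPII(text) {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]")  // email addresses
    .replace(/\+?\d[\d\s().-]{8,}\d/g, "[PHONE]");   // phone-like digit runs
}
```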

Troubleshooting

Empty or Invalid Response

Check:

  1. Is the prompt clear and specific?
  2. Does the input data contain the expected fields?
  3. Is the model appropriate for the task complexity?
  4. Is the token limit sufficient?

Debug: View execution history → AI Prompt task → See full request/response

Inconsistent Classifications

Cause: Temperature too high or prompt ambiguous

Solution:

  • Lower temperature to 0.0-0.2
  • Add explicit classification rules
  • Provide examples in the prompt

Response Not in Expected Format

Issue: Asked for JSON, got plain text

Solution:

Prompt: You MUST respond with valid JSON only. 
Do not include any text before or after the JSON.

{
  "field": "value"
}

Alternative: Use Code task to parse and clean response
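That Code task might look like this — a sketch that pulls the first JSON object out of a response that may include markdown fences or stray text, with a caller-supplied fallback:

```javascript
// Extract and parse a JSON object from an AI response that may be
// wrapped in markdown fences or surrounded by extra text.
function parseAiJson(response, fallback) {
  const start = response.indexOf("{");
  const end = response.lastIndexOf("}");
  if (start === -1 || end <= start) return fallback;
  try {
    return JSON.parse(response.slice(start, end + 1));
  } catch (e) {
    return fallback; // malformed JSON: fall back to a safe default
  }
}
```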

Timeout or Slow Response

Causes:

  • GPT-4 is slower than GPT-3.5
  • Very long prompts
  • Max tokens set too high

Solutions:

  • Switch to GPT-3.5 for simple tasks
  • Reduce input context length
  • Lower the max_tokens setting

Costs Higher Than Expected

Check:

  • Using GPT-4 when GPT-3.5 is sufficient?
  • Token limits set too high?
  • Running in high-frequency loops?
  • Inefficient prompts?

Optimize:

  • Use GPT-3.5 for classification/extraction
  • Set appropriate max_tokens
  • Cache repeated AI calls
  • Simplify prompts

Frequently Asked Questions

What models are available?

  • GPT-4 Turbo - Most capable, best for complex tasks
  • GPT-3.5 Turbo - Fast and cost-effective

How much does each API call cost?

Pricing varies by model and token usage:

  • GPT-3.5: ~$0.002 per 1K tokens
  • GPT-4: ~$0.03 per 1K tokens

Check OpenAI pricing for current rates.

Can AI access external data?

No, AI only sees data you include in the prompt from previous workflow tasks.

How accurate is the AI?

Depends on:

  • Task complexity
  • Prompt quality
  • Model selection
  • Input data quality

Always validate critical outputs.

Can I use my own OpenAI API key?

Check with BaseCloud support for custom API key configuration.

Is there a rate limit?

Yes, reasonable rate limits apply. Contact support for high-volume needs.

Can AI generate images?

This task is text-only. Image generation requires separate DALL-E integration.

How do I structure JSON responses?

Explicitly define schema in prompt:

Respond with this exact JSON structure:
{
  "field1": "value",
  "field2": number
}


Related Tasks

  • Code Task - Process AI responses
  • If Task - Route based on AI classification
  • Variable Task - Cache AI results
  • Formatter Task - Clean AI output
  • Email Task - Send AI-generated content