Azure AI Foundry Templates Implementation Guide

A comprehensive guide to implementing Azure AI Foundry templates

Template Summaries

Get Started with AI Chat

A web-based chat application that allows users to interact with Azure OpenAI models. It provides a foundation for building conversational AI experiences with optional Retrieval-Augmented Generation (RAG) capabilities.

Key Features:

  • Web-based chat interface
  • Optional RAG capabilities using Azure AI Search
  • Built-in monitoring and tracing
  • Secure deployment with managed identity

See the AI Chat Implementation Guide below.

Get Started with AI Agents

A web-based chat application with an AI agent that can perform tasks and retrieve information from uploaded files. It leverages Azure AI Agent service and Azure AI Search for knowledge retrieval.

Key Features:

  • Web-based chat interface with AI agent capabilities
  • Knowledge retrieval from uploaded files with citations
  • Built-in monitoring for troubleshooting
  • Secure deployment with managed identity

See the AI Agents Implementation Guide below.

Conversation Knowledge Mining

A solution that helps organizations extract actionable insights from large volumes of conversational data by identifying key themes, patterns, and relationships in unstructured dialogue.

Key Features:

  • Entity and relationship extraction from unstructured data
  • Processing of conversation data at scale with vector embeddings
  • Interactive dashboard with data visualizations
  • Natural language interaction for querying insights
  • Support for audio and text inputs

See the Conversation Knowledge Mining Implementation Guide below.

AI Chat Implementation Guide

Prerequisites

  • Azure subscription
  • Appropriate permissions (Role-Based Access Control Administrator, User Access Administrator, or Owner role)
  • Development environment (GitHub Codespaces, VS Code Dev Containers, or local setup)

Step 1: Set Up Your Development Environment

Option A: GitHub Codespaces (Recommended for Quick Start)

  1. Navigate to the get-started-with-ai-chat repository
  2. Click the "Code" button and select the "Codespaces" tab
  3. Click "Create codespace on main"

Option B: Local Development Environment

Install the required tools:

# Install Azure Developer CLI
curl -fsSL https://aka.ms/install-azd.sh | bash

# Install Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Ensure Docker is installed
docker --version

Step 2: Clone the Repository (Local Development Only)

git clone https://github.com/Azure-Samples/get-started-with-ai-chat.git
cd get-started-with-ai-chat

Step 3: Initialize the Azure Developer CLI Project

# Login to Azure
azd auth login

# Initialize the project with a new environment name
azd init --environment myaichat

Step 4: Deploy the Application to Azure

# Deploy all resources to Azure
azd up

Step 5: Understanding the Key Components

The application consists of:

  1. Backend API (Python/FastAPI): Located in the src/api directory
  2. Frontend Web Interface: Served by the backend
  3. Azure Resources: AI models, storage, and other services

Chat Completion API Endpoint

@router.post("/chat")
async def chat_endpoint(request: ChatRequest) -> ChatResponse:
    """
    Process a chat request and return a response from the AI model.
    """
    try:
        # Get the chat client
        client = get_chat_client()
        
        # Process the request
        response = await client.get_chat_completion(
            messages=request.messages,
            temperature=request.temperature,
            top_p=request.top_p,
            max_tokens=request.max_tokens,
            stream=request.stream
        )
        
        return ChatResponse(
            message=response.message,
            usage=response.usage
        )
    except Exception as e:
        # Handle errors
        logger.error(f"Error in chat endpoint: {str(e)}")
        raise HTTPException(status_code=500, detail=str(e))
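
For reference, a request body for this endpoint could look like the following. The field names mirror the ChatRequest model in the excerpt above; the exact schema and defaults in the repository may differ:

```python
import json

# Hypothetical request body for POST /chat; field names mirror the
# ChatRequest model shown in the endpoint excerpt
chat_request = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What can Azure AI Foundry do?"},
    ],
    "temperature": 0.7,
    "top_p": 0.95,
    "max_tokens": 800,
    "stream": False,
}

# Serialize for sending with any HTTP client
payload = json.dumps(chat_request)
```

Send the payload as JSON to the deployed app's /chat route with any HTTP client.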

Step 6: Customizing the Application

Modifying the AI Model

Update the deployment parameters in the infra/main.bicep file:

// Find the AI model deployment section
resource openAIDeployment 'Microsoft.CognitiveServices/accounts/deployments@2023-05-01' = {
  parent: openAI
  name: openAIModelDeploymentName
  properties: {
    model: {
      format: 'OpenAI'
      name: openAIModelName  // Change this to your preferred model
    }
    // Other properties...
  }
}

Enabling RAG Capabilities

Update the infra/main.parameters.json file:

{
  "parameters": {
    "enableSearch": {
      "value": true
    }
  }
}

Step 7: Testing the Application

# Display the deployed app's services and endpoints
azd show

Step 8: Monitoring and Troubleshooting

# View application logs
azd monitor

Step 9: Cleaning Up Resources

# Remove all deployed resources
azd down

AI Agents Implementation Guide

Prerequisites

  • Azure subscription
  • Appropriate permissions (Role-Based Access Control Administrator, User Access Administrator, or Owner role)
  • Development environment (GitHub Codespaces, VS Code Dev Containers, or local setup)

Step 1: Set Up Your Development Environment

Option A: GitHub Codespaces (Recommended for Quick Start)

  1. Navigate to the get-started-with-ai-agents repository
  2. Click the "Code" button and select the "Codespaces" tab
  3. Click "Create codespace on main"

Option B: Local Development Environment

Install the required tools:

# Install Azure Developer CLI
curl -fsSL https://aka.ms/install-azd.sh | bash

# Install Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Ensure Docker is installed
docker --version

Step 2: Clone the Repository (Local Development Only)

git clone https://github.com/Azure-Samples/get-started-with-ai-agents.git
cd get-started-with-ai-agents

Step 3: Initialize the Azure Developer CLI Project

# Login to Azure
azd auth login

# Initialize the project with a new environment name
azd init --environment myaiagent

Step 4: Deploy the Application to Azure

# Deploy all resources to Azure
azd up

Step 5: Understanding the Key Components

The application consists of:

  1. Backend API (Python/FastAPI): Located in the src/api directory
  2. Frontend Web Interface: Located in the src/frontend directory
  3. Data Files: Sample data in the src/data directory
  4. Azure Resources: AI models, search service, and other services

Agent Configuration

@router.post("/agent")
async def agent_endpoint(request: AgentRequest) -> AgentResponse:
    """
    Process an agent request and return a response.
    """
    try:
        # Get the agent client
        client = get_agent_client()
        
        # Process the request
        response = await client.run_agent(
            messages=request.messages,
            tools=get_available_tools(),
            temperature=request.temperature
        )
        
        return AgentResponse(
            message=response.message,
            citations=response.citations,
            usage=response.usage
        )
    except Exception as e:
        # Handle errors
        logger.error(f"Error in agent endpoint: {str(e)}")
        raise HTTPException(status_code=500, detail=str(e))
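
A request/response round trip for this endpoint might look like the following. The field names mirror the AgentRequest and AgentResponse models above, and the response body is a made-up illustration of where citations appear, not actual repository output:

```python
import json

# Hypothetical request body for POST /agent
agent_request = {
    "messages": [
        {"role": "user", "content": "Summarize the uploaded product manual."}
    ],
    "temperature": 0.2,
}

# Hypothetical response body, showing where citations surface
agent_response = json.loads("""
{
  "message": "The manual covers setup, maintenance, and troubleshooting.",
  "citations": [{"file_id": "doc-1", "quote": "Section 2: Setup"}],
  "usage": {"prompt_tokens": 120, "completion_tokens": 35}
}
""")

# Collect which uploaded files were cited in the answer
cited_files = [c["file_id"] for c in agent_response["citations"]]
```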

File Upload and Search Integration

@router.post("/upload")
async def upload_file(file: UploadFile) -> FileUploadResponse:
    """
    Upload a file for knowledge retrieval.
    """
    try:
        # Save the file
        file_id = await save_file(file)
        
        # Index the file content for search
        await index_file_content(file_id, file)
        
        return FileUploadResponse(
            file_id=file_id,
            status="success",
            message="File uploaded and indexed successfully"
        )
    except Exception as e:
        logger.error(f"Error uploading file: {str(e)}")
        raise HTTPException(status_code=500, detail=str(e))
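
The index_file_content call above is assumed to split the uploaded document into passages before pushing them to Azure AI Search. A minimal chunking helper (illustrative only; the repository's actual indexing logic may differ) could look like:

```python
from typing import List

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> List[str]:
    """
    Split text into overlapping chunks so each passage fits the embedding
    model's input limit while retaining some surrounding context.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # Step forward by less than chunk_size so chunks overlap
        start += chunk_size - overlap
    return chunks
```

Each chunk would then be embedded and indexed as its own search document, carrying the file_id so citations can point back to the source file.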

Step 6: Customizing the Application

Modifying the Agent Behavior

Customize the tools available to the agent in src/api/tools/tool_registry.py:

def get_available_tools() -> List[Tool]:
    """
    Return the list of tools available to the agent.
    """
    return [
        Tool(
            name="search_knowledge_base",
            description="Search the knowledge base for information",
            parameters={
                "query": {
                    "type": "string",
                    "description": "The search query"
                }
            },
            function=search_knowledge_base
        ),
        # Add your custom tools here
    ]
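
As a concrete (hypothetical) example, a custom tool can wrap a plain Python function. The Tool dataclass below is a minimal stand-in for the repository's Tool type, assumed to have the same fields as the registry entry above:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

# Minimal stand-in for the repository's Tool type (assumed shape)
@dataclass
class Tool:
    name: str
    description: str
    parameters: Dict[str, Any]
    function: Callable[..., Any]

def get_order_status(order_id: str) -> str:
    """Hypothetical custom tool: look up an order's shipping status."""
    # A real tool would call an order-tracking API here
    return f"Order {order_id} is in transit"

order_status_tool = Tool(
    name="get_order_status",
    description="Look up the shipping status of a customer order",
    parameters={
        "order_id": {"type": "string", "description": "The order identifier"}
    },
    function=get_order_status,
)
```

Appending order_status_tool to the list returned by get_available_tools would let the agent call it when a user asks about an order.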

Step 7: Testing the Application

# Display the deployed app's services and endpoints
azd show

To test the agent with file uploads:

  1. Click the "Upload Files" button
  2. Select a document (PDF, DOCX, TXT, etc.)
  3. Wait for the file to be processed and indexed
  4. Ask questions about the content of the uploaded document

Step 8: Monitoring and Troubleshooting

# View application logs
azd monitor

Step 9: Cleaning Up Resources

# Remove all deployed resources
azd down

Conversation Knowledge Mining Implementation Guide

Prerequisites

  • Azure subscription
  • Appropriate permissions to create and manage resources
  • Development environment with Git installed

Step 1: Clone the Repository

git clone https://github.com/microsoft/Conversation-Knowledge-Mining-Solution-Accelerator.git
cd Conversation-Knowledge-Mining-Solution-Accelerator

Step 2: Review the Solution Architecture

Before deployment, familiarize yourself with the solution architecture:

  1. Data Ingestion: Processes for ingesting audio and text conversations
  2. Processing Pipeline: Components that analyze and extract insights from conversations
  3. Knowledge Base: Storage for structured insights and relationships
  4. Vector Store: For efficient retrieval of conversation data
  5. Web Frontend: Interface for exploring insights through natural language
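
Conceptually, these stages form a linear pipeline. The stubbed sketch below illustrates how a conversation flows through them; each stub stands in for an Azure service and is not the accelerator's actual code:

```python
from typing import Dict, List

def ingest(raw: str) -> Dict:
    """Data ingestion: wrap raw conversation text in a record."""
    return {"text": raw}

def analyze(conversation: Dict) -> Dict:
    """Processing pipeline: extract crude key phrases (stand-in for AI analysis)."""
    words = conversation["text"].lower().split()
    conversation["key_phrases"] = sorted(set(words))
    return conversation

def store(conversation: Dict, knowledge_base: List[Dict]) -> None:
    """Knowledge base / vector store: persist the processed record."""
    knowledge_base.append(conversation)

def query(question: str, knowledge_base: List[Dict]) -> List[Dict]:
    """Web frontend path: retrieve records overlapping the question's terms."""
    terms = set(question.lower().split())
    return [c for c in knowledge_base if terms & set(c["key_phrases"])]

knowledge_base: List[Dict] = []
store(analyze(ingest("Customer reported a payment failure")), knowledge_base)
results = query("payment issues", knowledge_base)
```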

Step 3: Deploy the Infrastructure

# Login to Azure
azd auth login

# Initialize the project with a new environment name
azd init --environment myconvomining

# Deploy all resources to Azure
azd up

Step 4: Understanding the Key Components

The application consists of:

  1. Backend API (Python/FastAPI): Located in the src/api directory
  2. Frontend Web Application: Located in the src/App directory
  3. Data Processing Pipelines: Components for processing conversational data

Conversation Processing Pipeline

def process_conversation(conversation_data: dict) -> dict:
    """
    Process a conversation to extract insights.
    """
    # Extract basic metadata
    conversation_id = conversation_data.get("id", str(uuid.uuid4()))
    
    text_content = conversation_data.get("text", "")
    
    # If audio is attached, transcribe it first so the transcript
    # flows through the same text analysis below
    audio_url = conversation_data.get("audio_url")
    if audio_url:
        transcription = transcribe_audio(audio_url)
        text_content = transcription.text
    
    # Defaults in case there is no text to analyze
    entities, key_phrases, sentiment, embeddings = [], [], None, None
    
    if text_content:
        # Extract entities
        entities = extract_entities(text_content)
        
        # Extract key phrases
        key_phrases = extract_key_phrases(text_content)
        
        # Perform sentiment analysis
        sentiment = analyze_sentiment(text_content)
        
        # Generate embeddings for vector search
        embeddings = generate_embeddings(text_content)
    
    # Assemble the processed record
    processed_data = {
        "id": conversation_id,
        "text": text_content,
        "entities": entities,
        "key_phrases": key_phrases,
        "sentiment": sentiment,
        "embeddings": embeddings,
        "metadata": conversation_data.get("metadata", {})
    }
    
    # Store in database
    store_processed_conversation(processed_data)
    
    return processed_data

Natural Language Query Interface

@router.post("/query")
async def query_conversations(request: QueryRequest) -> QueryResponse:
    """
    Process a natural language query about conversations.
    """
    try:
        # Process the natural language query
        query_text = request.question
        
        # Generate embeddings for the query
        query_embedding = generate_embeddings(query_text)
        
        # Retrieve relevant conversations using vector search
        relevant_conversations = vector_search(
            embedding=query_embedding,
            top_k=request.top_k or 5
        )
        
        # Generate a response using Azure OpenAI
        response = generate_response(
            query=query_text,
            context=relevant_conversations,
            temperature=request.temperature or 0.7
        )
        
        # Format citations and evidence
        citations = format_citations(relevant_conversations)
        
        return QueryResponse(
            answer=response,
            citations=citations,
            conversations=relevant_conversations
        )
    except Exception as e:
        logger.error(f"Error processing query: {str(e)}")
        raise HTTPException(status_code=500, detail=str(e))
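
The vector_search helper above is assumed to query the solution's vector store. A minimal in-memory equivalent using cosine similarity (illustrative only, not the accelerator's implementation) looks like:

```python
import math
from typing import Dict, List, Sequence

def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine of the angle between two vectors; 0.0 if either is zero."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def vector_search(embedding: Sequence[float], index: List[Dict], top_k: int = 5) -> List[Dict]:
    """Rank indexed conversations by similarity to the query embedding."""
    ranked = sorted(
        index,
        key=lambda item: cosine_similarity(embedding, item["embedding"]),
        reverse=True,
    )
    return ranked[:top_k]

# Toy 2-dimensional index; real embeddings have hundreds of dimensions
index = [
    {"id": "c1", "embedding": [1.0, 0.0]},
    {"id": "c2", "embedding": [0.0, 1.0]},
    {"id": "c3", "embedding": [0.7, 0.7]},
]
results = vector_search([1.0, 0.1], index, top_k=2)
```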

Step 5: Customizing the Solution

Adding Custom Entity Types

Modify the entity extraction configuration in src/api/services/entity_extraction.py:

def configure_custom_entity_types():
    """
    Configure custom entity types for extraction.
    """
    return [
        {
            "name": "ProductName",
            "description": "Names of products mentioned in conversations",
            "examples": ["Product A", "Product B", "Service X"]
        },
        {
            "name": "IssueType",
            "description": "Types of issues mentioned in conversations",
            "examples": ["login problem", "payment failure", "slow performance"]
        },
        # Add your custom entity types here
    ]
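
To see how such a configuration might drive extraction, here is a naive example-based matcher. It is purely illustrative; the accelerator relies on Azure AI language models rather than keyword matching:

```python
from typing import Dict, List

def match_entities(text: str, entity_types: List[Dict]) -> List[Dict]:
    """Tag substrings of text that match an entity type's examples."""
    lowered = text.lower()
    matches = []
    for entity_type in entity_types:
        for example in entity_type["examples"]:
            if example.lower() in lowered:
                matches.append({"type": entity_type["name"], "text": example})
    return matches

# Inlined subset of the configuration shown above
entity_types = [
    {"name": "IssueType", "examples": ["payment failure", "login problem"]},
]
found = match_entities("I keep hitting a payment failure at checkout", entity_types)
```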

Step 6: Loading Sample Data

# Navigate to the sample data directory
cd workshop/data

# Run the data loading script
python load_sample_data.py

Step 7: Testing the Solution

# Display the deployed app's services and endpoints
azd show

To test the natural language query interface:

  1. Click on the "Ask Questions" tab
  2. Type a question about the conversation data
  3. Review the response and supporting evidence

Step 8: Monitoring and Troubleshooting

# View application logs
azd monitor

Step 9: Cleaning Up Resources

# Remove all deployed resources
azd down

Additional Resources