Generative AI Overview

Understanding Generative AI (GenAI)

Artificial Intelligence (AI): Any technique that enables computers to mimic human intelligence, using approaches such as rules, logic, and machine learning.

Machine Learning (ML): A subset of AI where machines search for patterns in data to draw inferences.

Generative AI (GenAI): An advanced AI technology capable of generating original content, such as text, stories, images, or video, that resembles human-created content for real-world tasks.

Foundation Models (FMs): Large, general-purpose models that power GenAI technology; they are pre-trained on vast datasets and can contain hundreds of billions of parameters, e.g. GPT.

Large Language Model (LLM): A type of Foundation Model that can process inputs in a variety of formats and generate text content in response. LLMs can be improved over time through additional training and user feedback, rather than learning automatically during use. Examples include Meta’s Llama, Anthropic’s Claude, and Amazon’s Nova.
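
A minimal sketch of prompting an LLM and receiving generated text, using the OpenAI Python SDK purely as an illustration; the model name and prompt are assumptions, and an OPENAI_API_KEY environment variable is assumed to be set.

```python
# Minimal prompt/response sketch (illustrative only).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat-capable model works
    messages=[
        {"role": "user", "content": "Explain what a large language model is in one sentence."}
    ],
)

print(response.choices[0].message.content)
```

The same pattern applies to other providers' SDKs (for example Anthropic or Amazon Bedrock); only the client and model identifiers change.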

What Can GenAI Do?

  • Rapidly analyze vast amounts of data to provide capabilities such as text summarization, question answering, and code generation.
  • Produce original content that closely resembles human-generated content for real-world tasks in response to natural language inputs.
  • Use information supplied in the prompt at inference time (in-context learning), so that carefully curated prompts can steer the model toward more comprehensive outputs.
  • Aid human decision-making by drawing inferences from data and improving predictions.
  • Translate languages or convert speech to text.
  • Be adapted to a wide variety of domains through fine-tuning or Retrieval-Augmented Generation (RAG); a minimal RAG sketch follows this list.
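
A minimal sketch of the RAG pattern mentioned above, assuming a toy keyword-overlap retriever over an in-memory document list; a production system would use an embedding model and a vector store, and the final LLM call is left as a placeholder.

```python
# Minimal RAG sketch: retrieve relevant context, then build a grounded prompt.
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many query words they share (toy retriever)."""
    query_words = set(query.lower().split())

    def score(doc: str) -> int:
        return len(query_words & set(doc.lower().split()))

    return sorted(documents, key=score, reverse=True)[:top_k]


def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Combine retrieved context with the user question into one prompt."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\n"
    )


if __name__ == "__main__":
    docs = [
        "Our refund policy allows returns within 30 days of purchase.",
        "Support hours are 9am to 5pm, Monday through Friday.",
        "Enterprise customers get a dedicated account manager.",
    ]
    prompt = build_grounded_prompt("What is the refund policy?", docs)
    print(prompt)  # this grounded prompt would then be sent to the LLM
```

Because the answer is constrained to retrieved enterprise content, this pattern is also what reduces hallucinations in the grounded-response approach described later.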

GenAI Concerns

  • Resources: Requires huge amounts of storage and compute power
  • Hallucination and Bias: The model may generate inaccurate information that users mistakenly accept as true, or reproduce biases from its training data in its responses
  • Sustainability: Large-scale training and inference consume significant energy, raising concerns about environmental impact and the need for ethical practices
  • Security/Privacy Risks: Requires guardrails for data security, privacy, access policies, and content filtering; a minimal content-filter sketch follows this list
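
A minimal sketch of a pre-prompt guardrail along the lines described above, assuming a simple regex-based redactor and an illustrative blocklist; real deployments would layer managed policy controls and content moderation on top of this.

```python
# Minimal guardrail sketch: redact likely PII (emails, phone numbers) and
# block disallowed topics before text ever reaches the model. The patterns
# and blocklist below are illustrative assumptions, not a complete policy.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")
BLOCKED_TOPICS = {"password", "social security number"}


def apply_guardrails(prompt: str) -> str:
    """Reject blocked topics; otherwise return the prompt with PII redacted."""
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            raise ValueError(f"Prompt rejected by content policy: {topic!r}")
    prompt = EMAIL_RE.sub("[REDACTED_EMAIL]", prompt)
    prompt = PHONE_RE.sub("[REDACTED_PHONE]", prompt)
    return prompt


if __name__ == "__main__":
    safe = apply_guardrails("Email jane.doe@example.com about order 1234.")
    print(safe)  # -> "Email [REDACTED_EMAIL] about order 1234."
```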

GenAI Doesn't Have to be a Black Box

  • Grounded Responses, Not Guesswork
    GenAI systems should be designed to rely on trusted enterprise data using retrieval-augmented generation (RAG), reducing hallucinations and improving answer accuracy.
  • Transparent Architecture and Data Flows
    The AI architecture and implementation need to be clear and well-documented so teams understand where data comes from, how it’s processed, and how outputs are generated.
  • Human-in-the-Loop Controls
    Critical workflows should include review, approval, and feedback loops, ensuring AI outputs are auditable and aligned with business and regulatory requirements.
  • Model Choice Based on Use Case, Not Hype
    Appropriate large language models (LLMs), small language models (SLMs), or task-specific models should be selected to balance accuracy, latency, cost, and energy consumption.
  • Built-In Guardrails and Policy Enforcement
    Native cloud controls should be used to apply safety, security, and compliance guardrails for managing prompt behavior, data access, and output boundaries.
  • Observability and Continuous Monitoring
    AI systems should be instrumented with logging, metrics, and tracing to monitor performance, drift, and cost in real time; a minimal logging sketch follows this list.
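
A minimal sketch of the logging-and-metrics idea from the last bullet, assuming a placeholder call_model function standing in for a real LLM client; a production system would export these measurements to a metrics or tracing backend rather than plain logs.

```python
# Minimal observability sketch: log latency, an estimated token count, and
# success/failure for every model call.
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("genai.observability")


def call_model(prompt: str) -> str:
    """Placeholder for a real LLM invocation."""
    return f"(model response to: {prompt[:40]}...)"


def monitored_call(prompt: str) -> str:
    """Invoke the model and record latency, status, and a rough token estimate."""
    start = time.perf_counter()
    try:
        response = call_model(prompt)
        status = "ok"
        return response
    except Exception:
        status = "error"
        raise
    finally:
        latency_ms = (time.perf_counter() - start) * 1000
        est_tokens = len(prompt) // 4  # rough ~4 chars/token estimate for cost tracking
        logger.info(
            "model_call status=%s latency_ms=%.1f est_prompt_tokens=%d",
            status, latency_ms, est_tokens,
        )


if __name__ == "__main__":
    print(monitored_call("Summarize our Q3 support ticket trends."))
```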