AIOPS/MLOPS | Introduction to AI | Sagar Kakkala's World




🔍 Understanding AI, Machine Learning, Deep Learning, and Generative AI

Artificial Intelligence (AI) is reshaping the way we live, work, and interact with technology. From voice assistants to intelligent image generation tools, AI is the invisible force behind many of today’s digital innovations.


🧠 AI, Machine Learning & Deep Learning – What’s the Difference?

🤖 Artificial Intelligence (AI)

AI refers to machines or systems that mimic human intelligence. This includes learning, reasoning, problem-solving, perception, and language understanding.

📊 Machine Learning (ML)

Machine Learning is a subset of AI. It enables machines to learn from data and improve their performance over time without being explicitly programmed.

Types of Machine Learning:

  • Supervised Learning: Trained on labeled data.
Example: You show the model many images of animals, each labeled (e.g., "Cat", "Dog"). The model learns the patterns associated with each label, so when you later show it a new, unlabeled picture of a cat, it predicts "Cat".

  • Unsupervised Learning: Trained on unlabeled data to find hidden patterns.
Example: You give the model a set of geometric shapes with no labels. The model notices that some are similar (e.g., all squares) and clusters them together, identifying shared features even though no label was provided.

  • Semi-Supervised Learning: Uses a mix of labeled and unlabeled data.
Example: You show the model a few labeled pictures of buses, then many more pictures without labels. Because it has learned patterns from the labeled examples, it starts to correctly identify the unlabeled bus images.

  • Reinforcement Learning: The model learns from rewards and penalties (feedback) through interaction with an environment, unlike the others, which rely purely on datasets.
Example: It's like a student taking a quiz without knowing the answers in advance. After every attempt they get feedback, "Correct" or "Incorrect", and use it to do better next time.
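The supervised idea above can be sketched in a few lines. This is a toy 1-nearest-neighbour classifier on two made-up numeric features (standing in for real image data); all numbers and labels are illustrative:

```python
# Labeled training data: (features, label); numbers are made up for illustration
training = [((2.0, 3.0), "Cat"), ((2.2, 2.8), "Cat"),
            ((7.0, 8.0), "Dog"), ((6.5, 7.5), "Dog")]

def predict(features):
    """Classify by the single closest labeled example (1-nearest neighbour)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(training, key=lambda item: dist(item[0], features))
    return closest[1]

print(predict((2.1, 3.1)))  # "Cat": the nearest labeled examples are cats
print(predict((6.8, 7.9)))  # "Dog"
```

A new, unseen point is labeled by whichever labeled example it most resembles, which is the core of "learning patterns from labeled data".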

🔍 Deep Learning

Deep Learning is a subset of Machine Learning, inspired by the structure and function of the human brain. It uses artificial neural networks — especially deep neural networks with many layers — to process data in complex ways.
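To make "layers of artificial neurons" concrete, here is a minimal sketch of a single neuron (weighted sum plus activation) stacked into a tiny two-layer network. All weights here are made-up numbers; real deep networks learn millions of them:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid squashes the output into (0, 1)

# A tiny two-layer "network": two hidden neurons feed one output neuron
inputs = [0.5, 0.8]
hidden = [neuron(inputs, [0.4, -0.2], 0.1), neuron(inputs, [0.7, 0.3], -0.1)]
output = neuron(hidden, [1.2, -0.6], 0.0)
print(round(output, 3))  # a single value between 0 and 1
```

Deep networks repeat this pattern across many layers, which is what lets them learn the complex visual and audio patterns described below.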

🖼️ 1. Image Recognition

Deep Learning models can identify and classify images, objects, or patterns in visual data.

  • Examples:

    • Face recognition on your phone (e.g., Apple Face ID)

    • Diagnosing diseases from medical scans (e.g., X-rays, MRIs)

    • Classifying objects in photos (e.g., Google Photos auto-tagging)

🔊 2. Speech Recognition

Deep Learning enables machines to understand spoken language by converting audio into text.

  • Examples:

    • Virtual assistants like Siri, Alexa, and Google Assistant

    • Transcribing meetings (e.g., Zoom Live Transcription)

    • Voice-to-text features on smartphones

🗣️ 3. Natural Language Processing (NLP)

This involves understanding, interpreting, and generating human language.

  • Examples:

    • Chatbots (e.g., ChatGPT)

    • Language translation (e.g., Google Translate)

    • Email autocomplete (e.g., Gmail Smart Compose)

🚗 4. Self-Driving Cars

Deep Learning helps autonomous vehicles understand their environment using cameras, LiDAR, and sensors.

  • Examples:

    • Lane detection and object avoidance in Tesla Autopilot

    • Traffic sign recognition

    • Predicting pedestrian movement

🌟 Generative AI – The Cutting Edge of Deep Learning

Generative AI (GenAI) is a powerful subset of Deep Learning focused on creating new content — whether it's text, images, audio, video, or code — based on patterns it has learned from massive datasets.

Examples:

  • Text-to-Image: You type "Create a modern tech company logo," and it generates unique logo designs.

  • Text-to-Video: You describe a scene, and it generates a short video (e.g., using Runway or Sora).

  • Image Editing: You upload a rough logo sketch, and the AI enhances or reimagines it professionally.

  • Text-to-Code: You describe a function in English, and tools like GitHub Copilot write the code for it.


📌 AI Hierarchy Diagram

Here's a simple diagram to help you understand the relationship between AI, Machine Learning, Deep Learning, and Generative AI: each is a subset of the one before it.

Artificial Intelligence
 └── Machine Learning
      └── Deep Learning
           └── Generative AI

Quick Notes on How Tokens Work

  • What is a Token?
    A token is a piece of text that the language model processes. It can be as small as a character or as large as a word or subword.

  • Tokenization
    The process of breaking down text into tokens. For example:

    • "ChatGPT is awesome!" might tokenize to: ["Chat", "G", "PT", " is", " awesome", "!"] depending on the tokenizer.

    • You can verify it here - OpenAI tokenizer

  • Why Tokens Matter

    • Models like GPT don’t read text as words but as tokens.

    • The length of input and output is measured in tokens, not characters or words.

  • Token Limits

    • Each model has a max token limit for prompt + completion (e.g., GPT-4 might have 8,192 tokens max).

    • If the token count exceeds the limit, the model will truncate or refuse to process the input fully.

  • Token Cost & Efficiency

    • Tokens are what determine the cost and speed of processing.

    • Efficient prompts minimize token usage to save costs and speed up responses.

  • Tokens also drive completion: a language model predicts the next most likely token. Give it a prompt like "Paris is a | complete the sentence" and it fills in the most probable next tokens to finish the sentence.
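The tokenization idea above can be sketched with a naive tokenizer. This is not the real GPT tokenizer (which uses byte-pair encoding and splits into subwords like ["Chat", "G", "PT"]); it is only an illustration of breaking text into countable pieces:

```python
import re

def naive_tokenize(text):
    """Naive tokenizer: splits text into word and punctuation tokens.
    Real LLM tokenizers use learned subword vocabularies instead."""
    return re.findall(r"\w+|[^\w\s]", text)

tokens = naive_tokenize("ChatGPT is awesome!")
print(tokens)       # ['ChatGPT', 'is', 'awesome', '!']
print(len(tokens))  # 4 tokens counted against the model's token limit
```

Counting tokens this way shows why prompt length is measured in tokens rather than characters or words, even though the real counts differ from this sketch.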

🧠 Prompt Engineering & Hallucinations in LLMs

Prompt Engineering is the skill of crafting precise and structured prompts to get the most accurate and useful outputs from a Large Language Model (LLM) like ChatGPT, Claude, Gemini, or others.

How a model understands your input is largely shaped by the way the prompt is written. A poorly structured prompt can lead to incorrect or nonsensical answers — often referred to as hallucinations.


🎯 Why Prompt Engineering Matters

  • ✔️ Helps guide the model to generate more accurate, relevant, and creative responses
  • ✔️ Reduces the risk of incorrect or hallucinated information
  • ✔️ Enables better control over tone, structure, and format of output

🗂️ Key Types of Prompts

1. 📌 Explicit Prompt

A clear and detailed prompt that tells the model exactly what you want.

✅ Example: Write a short story about a girl riding a horse through stormy weather to meet her friend.
❌ Poor Prompt: Write a story about a girl.

More detail = Better output 🎯


2. 💬 Conversational Prompt

These mimic natural conversation, like talking to a chatbot.

Example: "Tell me a joke."
The response will often lead to further dialogue or follow-up questions, creating an interactive experience.


3. 🧾 Instructional Prompt

Gives the model specific instructions or structure to follow.

Example:

Write a detailed blog post discussing the benefits and drawbacks of renewable energy sources.
Structure:
- Introduction
- Body
- Conclusion

4. 📚 Context-Based Prompt

These include sufficient background or "context" to help the model give relevant and accurate answers.

Example: After providing a paragraph about climate change, ask: "How does this impact low-lying countries like Bangladesh?"

Tip: The more context you feed, the better the answer.


5. 🎨 Open-Ended Prompt

Vague or general prompts that allow the LLM to be creative.

Example: Tell me about a girl.

These prompts don’t provide context, so the model invents details.


6. ⚖️ Bias-Mitigation Prompt

Used to request objective, balanced responses on sensitive topics.

Example: Generate a neutral perspective on caste-based reservations in India, avoiding support for any specific caste or group.


7. 💻 Code-Generation Prompt

Instructs the model to generate code.

Example: Write a Python program to create a simple calculator.
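For the calculator prompt above, a model might return something like this minimal sketch (the function name and supported operators are illustrative, not a fixed answer):

```python
def calculator(a, b, op):
    """Simple calculator supporting +, -, *, and /."""
    ops = {
        "+": a + b,
        "-": a - b,
        "*": a * b,
        "/": a / b if b != 0 else None,  # avoid division by zero
    }
    if op not in ops:
        raise ValueError(f"Unsupported operator: {op}")
    return ops[op]

print(calculator(6, 7, "*"))  # 42
```

Adding structure to the prompt (e.g., "handle division by zero", "raise an error for unknown operators") is exactly how code-generation prompts steer the model toward more robust output.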


🚨 What Are Hallucinations in LLMs?

Hallucinations occur when a language model generates output that is factually incorrect or entirely made up, even if it sounds convincing.

❗ Why It Happens:

  • The model wasn't trained on domain-specific data (e.g., medical/legal)
  • Poorly constructed or vague prompts
  • Overgeneralization or filling in gaps with made-up content

Example of Hallucination:

Asking, "What is the weather in Amsterdam”, when it does not have access to data or no context is provided 

🛠️ How to Avoid Hallucinations:

  • ✔️ Use well-structured and specific prompts
  • ✔️ Provide clear context and examples
  • ✔️ Fine-tune the model with domain-specific data
  • ✔️ Validate responses with trusted sources when needed

🧠 Key Prompting Techniques

1. 🕳️ Zero-Shot Prompting

Ask the model to perform a task with no prior example or context.

Example: "Write a poem about Paris."


2. 🔂 One-Shot Prompting

Give the model one example before asking it to generate more.

Example:

"Translate 'Bonjour' to English: Hello."
Now: "Translate 'Gracias' to English."

3. 📈 Few-Shot Prompting

Provide the model with multiple examples so it learns the pattern better.

Example:

Translate "Bonjour" → "Hello"
Translate "Gracias" → "Thank you"
Translate "Ciao" → ?

🔧 Why Fine-Tune?

Fine-tuning is used when:
  • You want a model to use a specific tone or style (e.g., customer support tone).
  • You need better performance on narrow tasks (e.g., summarizing legal documents, classifying tech support tickets).
  • You want to include domain-specific knowledge (e.g., healthcare terms, engineering jargon).

🧠 How It Works (Simplified Steps)
  • Start with a pre-trained model (like GPT-3.5 or GPT-4).
  • Prepare a dataset of examples that show the desired input and output.
  • Format: Prompt → Expected Completion.
  • Train (fine-tune) the model on this custom dataset.
  • Evaluate and test the fine-tuned model to ensure it behaves as expected.
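The dataset-preparation step above usually means a JSONL file of prompt/completion pairs. This is a minimal sketch of that format; the file name, field names, and example texts are illustrative, and the exact schema varies by provider:

```python
import json

# Hypothetical prompt -> expected-completion pairs for a support-ticket classifier
examples = [
    {"prompt": "Ticket: My VPN keeps disconnecting.", "completion": "Category: Networking"},
    {"prompt": "Ticket: I forgot my account password.", "completion": "Category: Account Access"},
]

# Fine-tuning datasets are commonly stored as JSONL: one JSON object per line
with open("fine_tune_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

print(open("fine_tune_data.jsonl").read())
```

Each line pairs an input the model will see with the output you want it to learn, which is exactly the "Prompt → Expected Completion" format described in the steps above.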
