
🧠 Build AI Apps Locally with Streamlit, FastAPI & Ollama – No Cloud Required!

A Hands-On Masterclass to Kickstart Your AI Engineering Journey Using Python, Open Source Tools, and LLMs on Your Laptop

Facilitated by: Gourav Shah
Led by: Vivian Aranha
Presented by: School of AI


👋 Welcome, AI Navigators!

In this power-packed masterclass, we explored how to build a fully functional AI-powered chatbot from scratch — entirely on your laptop — without any cloud subscriptions or hosted APIs.

As part of the launch of the School of AI, this session was facilitated by Gourav Shah, founder of School of DevOps, and led by Vivian Aranha, veteran IT professional and co-founder of School of AI.

If you're looking to enter the world of AI engineering, this session was designed to give you a no-fluff, fast-start experience, with real code, real tools, and real-world relevance.


🚨 Why This Matters

AI is not just a buzzword — it's the next massive shift in how we build, automate, and interact with technology.

❝ It's not AI that will take your job — it's AI Engineers who will. ❞

Whether you're a working tech professional or an aspiring student, the fastest way to get into the AI wave is to build. This masterclass is your gateway.


🛠️ What We Built – A Local Generative AI App

Using less than 50 lines of Python code, we built a simple but powerful Generative AI chatbot that:

✅ Accepts user input via a Streamlit UI
✅ Sends the prompt to a FastAPI backend
✅ Uses Ollama to run LLMs like DeepSeek R1 or Llama 2 locally
✅ Returns an intelligent response — all without internet access (after the one-time model download)

⚡️ Benefits of This Local AI Stack

  • No OpenAI API Key required

  • No cloud costs

  • Runs offline

  • Complete control over your data

  • Great for POCs, startups, and experimentation


🧰 Tech Stack Used

Here’s the open-source stack used to build the project:

  • Streamlit: lightweight Python UI for the front-end

  • FastAPI: blazing-fast Python API backend

  • Ollama: runs large language models locally

  • DeepSeek / Llama 2: open-source LLMs used in the project

  • Python & pip: core scripting environment

  • Postman: API testing and debugging

All code used in this session will be shared on GitHub. Stay tuned for the repo link!


💡 What You’ll Learn

By the end of this workshop, you’ll understand:

  • How AI apps are structured (front-end, backend, model service)

  • How to run and interact with LLMs locally

  • How to build and test your own AI-powered tools

  • How to prototype and pitch AI startup ideas using simple apps like this

  • How to think like an AI Engineer, even if you’re not training models from scratch
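Interacting with a local LLM needs nothing beyond Python's standard library. As a sketch of the "run and interact with LLMs locally" step, here is a direct call to Ollama's `/api/generate` endpoint — the helper names are invented for illustration, and this assumes Ollama is running with a model already pulled:

```python
import json
import urllib.request

def build_payload(prompt, model="llama2"):
    # Ollama's /api/generate expects these fields; stream=False → one JSON reply
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt, model="llama2", url="http://localhost:11434/api/generate"):
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Swapping `model="deepseek-r1"` into the call is all it takes to switch LLMs — the rest of the app is unchanged.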


🎓 Who This Is For

  • IT Professionals exploring career growth or transitioning to AI roles

  • Developers and DevOps Engineers curious about GenAI workflows

  • Students preparing for the future of tech

  • Startups looking to prototype AI ideas fast

  • Anyone who wants to build and learn by doing

No prior AI knowledge required. If you know Python basics — you’re good to go.


"I can’t believe this was done in under an hour with under 50 lines of code. Mind blown!"

"So refreshing to see real projects instead of just theory!"


🔥 Final Takeaway: 2025 Is the Year to Get Into AI

The demand for AI engineers and AI-literate developers is exploding — and we’re just getting started.

This isn’t just about LLMs — it’s about building the skills, confidence, and portfolio you need to thrive in the AI-driven future.

Whether you're building apps, automating workflows, or launching your own product — this is your time.


🚀 What’s Next?

💡 Join the School of AI — New courses, challenges, and project-based tracks launching soon
📬 Subscribe to get the GitHub repo + replay link


🎓 Check out School of DevOps & AI Minidegree Programs
🤖 Stay tuned for the next live masterclass!


Follow us on:
🔗 schoolofai.dev
📺 YouTube: Vivian Aranha
🧵 LinkedIn: Vivian Aranha on LinkedIn
📧 Subscribe to this newsletter on Substack!

