#CellStrat AIBytes — AI Meetup on Graph Prompting, Voyager Agent, Mistral LLM
#CellStratAILab #disrupt4.0 #GenAI #LLM #ImagineView #KBMiner #CellBot #Health
PRESENTING AN IN-PERSON AI LAB MEETUP AND CUTTING-EDGE WEBINARS ON GENERATIVE AI THIS WEEKEND, 16–17 MAR 2024 :-
(1) Topic : CellStrat AI Lab IN-PERSON Meetup — Innovations in Generative AI
Location : WeWork Vaishnavi Signature, ORR, Bellandur, Bengaluru KA
Date : Saturday 16 Mar 2024, 2–6 PM IST
RSVP here to attend
Agenda :-
2:00–4:00 PM IST — Generative AI — An Introduction to Graph Prompting
Presenter : Dolcy Dhar, Data Scientist, CellStrat AI Lab
4:00–5:00 PM IST — Innovations in Generative AI
Presenter : Prabhash Thakur, AI Director
5:00–6:00 PM IST — Open Networking
(2) Topic : Gen AI — VOYAGER: An Open-Ended Embodied Agent with Large Language Models
Date : Saturday 16 Mar 2024, 9:30 AM IST / Friday 9 PM Pacific
Presenter : Dr Ramasubramaniam, AI Researcher
RSVP here to attend
Intro : VOYAGER is the first LLM-powered embodied lifelong learning agent in Minecraft that continuously explores the world, acquires diverse skills, and makes novel discoveries without human intervention.
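To give a feel for the approach, here is a minimal conceptual sketch (in Python, not the authors' code) of a Voyager-style lifelong-learning loop: an LLM proposes the next task from the agent's current state, the agent attempts it, and successful behaviours are stored in a growing skill library for reuse. The llm, environment, propose_task and attempt_task interfaces are hypothetical placeholders.

```python
# Conceptual sketch of a Voyager-style lifelong-learning loop.
# The interfaces below are hypothetical placeholders, not the actual Voyager implementation.
def lifelong_learning_loop(llm, environment, max_iterations=100):
    skill_library = {}  # task name -> reusable skill (e.g. generated code)
    for _ in range(max_iterations):
        state = environment.observe()
        # Automatic curriculum: the LLM proposes the next task given progress so far.
        task = llm.propose_task(state, known_skills=list(skill_library))
        # Attempt the task, reusing relevant skills retrieved from the library.
        skill, success = llm.attempt_task(task, environment, skill_library)
        if success:
            skill_library[task] = skill  # store the new skill for future reuse
    return skill_library
```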
(3) Topic : Gen AI — AWS Bedrock’s Knowledge Base & Agents: Simplifying RAG & Task Execution
Date : Sunday 17 Mar 2024, 9:30 AM IST / Saturday 9 PM Pacific
Presenter : Bismillah Kani, Sr AI Scientist
RSVP here to attend
Intro : Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon via a single API, along with a broad set of capabilities one needs to build generative AI applications with security, privacy, and responsible AI.
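As an illustration of the single-API idea, here is a minimal sketch of invoking a foundation model on Bedrock with boto3. The region, model ID and request-body schema shown are examples only (the body format is model-specific), and the sketch assumes AWS credentials and Bedrock model access are already configured.

```python
# Minimal sketch: calling a foundation model through the Amazon Bedrock runtime API.
import json
import boto3

# Assumes AWS credentials are configured and access to the chosen model is enabled.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body follows the Anthropic messages schema; other FMs use different schemas.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Summarize retrieval-augmented generation in two sentences."}
    ],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model ID; swap for any enabled FM
    body=body,
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```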
(4) Topic : Generative AI with Mistral LLM
Date : Sunday 17 Mar 2024, 2 PM IST
Presenter : Poulami Sarkar, AI Researcher
RSVP here to attend
Intro : Mistral 7B is a powerful LLM by Mistral AI. It is a decoder-only Transformer with the following architectural choices (a short usage sketch follows the list):
* Sliding Window Attention — trained with an 8K context length and a fixed cache size, with a theoretical attention span of 128K tokens.
* GQA (Grouped Query Attention) — allows faster inference and a lower cache size.
* Byte-fallback BPE tokenizer — ensures that characters are never mapped to out-of-vocabulary tokens.
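Here is the usage sketch mentioned above, using Hugging Face Transformers. It assumes the transformers, torch and accelerate packages are installed and enough GPU memory is available; mistralai/Mistral-7B-Instruct-v0.2 is the public instruct checkpoint, used here purely for illustration.

```python
# Minimal sketch: running Mistral 7B Instruct with Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # public instruct checkpoint (illustrative)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# The instruct checkpoints expect the [INST] ... [/INST] chat format,
# which apply_chat_template builds from a list of messages.
messages = [{"role": "user", "content": "Explain sliding window attention in one paragraph."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```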
Attend these webinars to learn about cutting-edge Gen AI topics!
ABOUT CELLSTRAT’S IMAGINEVIEW :-
CellStrat’s ImagineView is a world-class LLM Business Applications platform allowing sophisticated data mining and insight development from corporate literature. Try it out here : https://www.imagineview.com/
Sign up for a Free Trial to ImagineView — https://imagineview.com/signup
Questions or comments? We'd love to hear from you. Just email contact@cellstrat.com with your query!
Happy Knowledge Mining to you!
PS : Do sign up to receive Email Updates about ImagineView — http://eepurl.com/iG0rRQ