Llama 4: Education Focus
It's here, it's semi-open, it's powerful!
A first look at the Llama 4 models straight from Meta, which were officially announced yesterday.
Voice First
The Meta AI app is voice-first, which surprised me as I was putting my kids down to bed, but it's kinda cool. I've enjoyed talking to ChatGPT a bit as a brainstorming tool - it feels more natural than typing - and it was interesting that the Meta AI app pushes voice from the start. It reminded me of when BlackBerrys with slide-out keyboards were all the rage and the iPhone was like, nope! Digital keyboard… It's that big of a shift: AIs that talk and do away with awkwardly slow typed conversation interfaces, especially for Gen Z.
Fine-Tune in the Cloud, Run Locally?
The most exciting thing from the perspective of this blog is the touted ability to fine-tune a model on Meta's servers in the cloud, then download and run that model locally. It looks like local serving will go through vLLM rather than Ollama for now, but I'm sure Ollama support will come!
https://llama.developer.meta.com/docs/features/fine-tuning/?team_id=532885636347635
(You better believe I requested access to this feature!)
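I haven't gotten access yet, so I can't show the real workflow, but here's a minimal sketch of what running a downloaded fine-tune locally could look like using vLLM's offline inference API. The checkpoint path and prompt are placeholders; the assumption is that the downloaded weights land in a Hugging Face-style directory that vLLM can load.

```python
# Minimal sketch (untested, pending access): run a fine-tuned Llama 4
# checkpoint locally with vLLM's offline inference API.
from vllm import LLM, SamplingParams

# Hypothetical path to the fine-tuned weights downloaded from Meta's platform.
llm = LLM(model="./llama4-edu-finetune")

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain photosynthesis to a fifth grader."], params)
print(outputs[0].outputs[0].text)
```

vLLM can also expose the same checkpoint as an OpenAI-compatible server with `vllm serve ./llama4-edu-finetune`, which is handy if you want to point existing tools at it.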
For education, I see this as HUGE, but I'll have to write more about that later.
Language Is Better
I'm learning German through Duolingo to prepare for a trip there this coming December, and a fun feature of the Meta AI app is its multilingual support. ChatGPT has this too, so I'll try something in Spanish and ask my wife which response is better.
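For the curious, here's roughly how I'd script that side-by-side instead of eyeballing the two apps. This is a hypothetical sketch: the Llama side assumes Meta's Llama API exposes an OpenAI-compatible endpoint, and the base URL and model names are placeholders to check against the docs.

```python
# Hypothetical side-by-side: ask both models the same Spanish question.
# Uses the OpenAI SDK for ChatGPT; the Llama client assumes an
# OpenAI-compatible endpoint (base_url and model IDs are placeholders).
from openai import OpenAI

prompt = "Explícame la diferencia entre 'ser' y 'estar' con ejemplos."

chatgpt = OpenAI()  # reads OPENAI_API_KEY from the environment
llama = OpenAI(
    base_url="https://api.llama.com/compat/v1/",  # placeholder endpoint
    api_key="YOUR_LLAMA_API_KEY",
)

for name, client, model in [("OpenAI", chatgpt, "gpt-4o"),
                            ("Meta", llama, "llama-4-maverick")]:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {name} ---\n{resp.choices[0].message.content}\n")
```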
Coding
I tried running the last challenge I had, and here's the result - it worked! (Updated the GitHub repo too!)
The Bigger Picture
I love the work the foundation model companies are doing (Anthropic, OpenAI, Google). But if Meta is truly committed to releasing its models so they can be tuned, and focuses on developer tools for vertical industries like education and healthcare - and for hobbyists - they'll have a real opportunity to lead the way.