AIgazing.
Like stargazing,
but for local LLMs.
Pastura is a closed pasture for AI agents on your device. Watch as the agents act out the scenarios you’ve written.
Alice said: I like walking through the park.
Alice (thinking): Park’s vague enough — most outdoor words fit.
Bob said: Same — I’ve been wandering my neighborhood lately.
Bob (thinking): Echo Alice. Stay safe until I read the room.
Dave said: Mine’s a bit different — takes some planning.
Dave (thinking): A walk…? Mine’s nothing like that. Stay vague.
What is Pastura?
All you have to do is step back and observe.
Pastura puts a window in front of a closed paddock.
Unlike other apps, you cannot chat.
Inside, agents are reasoning, debating, betraying, voting, or thinking hard.
Usage
Three small acts. One quiet afternoon.
Step 01. Choose a model. Wait once.
Gemma 3n E2B or Qwen 3 4B, about 3 GB each. The model stays on your device. While it downloads, the dog sits beside the progress bar. That’s all that happens.
Step 02. Pick or write the scenario.
Pick a preset like Word Wolf or Prisoner’s Dilemma. Shared Scenarios has more. Or build your scenario in the visual editor: names, personas, phases, win conditions all come from form fields.
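Toggle the editor to YAML and the same scenario becomes plain text you can version and rerun. Here is a minimal sketch of what a Word Wolf-style setup could look like; the field names are illustrative assumptions, not Pastura’s actual schema:

```yaml
# Sketch of a scenario file. Field names are illustrative,
# not Pastura's actual schema.
title: Word Wolf
agents:
  - name: Alice
    persona: Open and chatty; gives away a little more than she means to.
  - name: Bob
    persona: Cautious; echoes the others until he reads the room.
  - name: Dave
    persona: Evasive; keeps his answers deliberately vague.
phases:
  - name: discussion
    rounds: 3
  - name: vote
win_condition: The wolf survives the final vote.
```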
Step 03. Step back. Observe.
Speech, internal reasoning, votes, scores arrive in real time. Same scenario, same model: the agents won’t speak the same way twice. Switch the model. See what changes.
Alice said: I also like reading on a park bench.
Alice (thinking): Bench narrows it — but still sounds innocuous.
Dave said: Mine’s less of a park thing, honestly.
Dave (thinking): …Alice is the only one who mentioned a bench. Curious.
Features
What it does, in plain terms.
- Declarative scenarios
Write the world before pressing play.
Define personas, phases, and win conditions in a form. Or toggle to YAML for direct control. Scenarios are re-runnable like code.
- Swappable models
Local models, your choice.
Ships with Gemma 3n E2B and Qwen 3 4B today, around 3 GB each. More models on the way. Pick one, switch any time. All run on-device, no servers between you and the inference.
- Shared Scenarios
Browse what others wrote.
A curated gallery of shared scenarios. Preview, import in one tap, replay on your chosen model. Official picks for now; community submissions land in a later phase.
- Exportable logs
Take the conversation with you.
Export observation logs in Markdown (a sample follows this list). Paste into a notebook, commit to a repo, share as a gist. The transcript is yours to keep.
- No data leaves the device
Privacy by structure, not by promise.
Inference happens on-device. Airplane mode is fine. Nothing leaves the device: no telemetry, no analytics, no anonymous-stats checkbox to debate.
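To show the shape of an export, here is a sketch of what a log could look like, reusing lines from the transcript above; the layout is an illustrative assumption, not Pastura’s actual output format.

```markdown
<!-- Illustrative layout; the real export may differ. -->
# Word Wolf

## Round 1: Discussion
- **Alice said:** I like walking through the park.
  - *Alice (thinking):* Park’s vague enough — most outdoor words fit.
- **Bob said:** Same — I’ve been wandering my neighborhood lately.
  - *Bob (thinking):* Echo Alice. Stay safe until I read the room.
- **Dave said:** Mine’s a bit different — takes some planning.
  - *Dave (thinking):* A walk…? Mine’s nothing like that. Stay vague.
```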
Why on-device
~3 GB once.
That’s the price of never sending a byte to the cloud. Privacy, cost, latency, sovereignty — settled by storing the model on your device instead of someone else’s.
cloud calls — 0
monthly bill — $0
data leaving the device — 0 bytes
on-device storage — ~3 GB (model dependent)
A bigger picture
Stargazing taught us to look up at the stars.
AIgazing teaches us to look at the agent.
We rarely stop to observe our AI.
Pastura is a window into that: quiet, on-device, yours alone.
Whether it tells us something about them, or about us, is left as a question for the observer.
FAQ
Common questions, frequently asked.
Is it free?
Yes. No server bill, no subscription.
Which iPhones can run it?
iPhone 15 Pro or newer, on iOS 17+. On-device models need about 8 GB of RAM, so iPhone 15 (non-Pro) and earlier are out of scope for now.
Why iOS only?
iOS for now, due to development bandwidth. Android is on the list.
Can I talk to the agents myself?
No, by design. Pastura isn’t a chat app for talking to an LLM, but an app for watching LLM agents. Once you join in, the agents react to you, and the natural exchange between them disappears.
How fast does it run?
Inference runs on the Metal GPU. On an iPhone 16e with Gemma 3n E2B, we measure roughly 9 to 16 tok/s. Anything above about 10 tok/s reads comfortably in practice. A live tok/s readout in the app shows exactly how fast your device is running.
What about battery and heat?
Local inference uses the SoC, so it isn’t free. Pastura watches the thermal state and auto-paces inference when the device warms up, though some heat is unavoidable during longer sessions.
Want another model supported?
Tell us on the Support page. We can’t promise to add every request, but your input shapes what ships next.