
nwspk/sugaroverflow-prototype-diary


Sparkle Bureaucracy — Prototype Diary

A working research log from my fellowship at Newspeak House (Cohort 25/26). Companion documents: research-inventory.md · sensemaking-map.md · synthesis.md


What is Sparkle Bureaucracy

Sparkle Bureaucracy is a network of people prototyping optimistic organisational futures for the age of AI.

The idea is pretty simple: what if the systems we're stuck with — the forms, the queues, the approvals, the checkpoints — didn't have to feel hostile? What if you could keep all the structure and change what it feels like to go through it? Same rituals, different intent.

I'm not trying to fix policy. I'm trying to run experiments that show what's possible and leave something behind — a pattern, a prototype, an artifact that gets hardened into something useful inside the systems it touches. The outputs are meant to illustrate, not prescribe.

The name matters: "bureaucracy" carries weight — it signals that this is serious, not just playful. "Sparkle" signals that it doesn't have to be miserable. That's the whole bet.

What makes it different from other things in this space: no prescribed outcome. I don't arrive with an answer. I show up with curiosity, run the experiment, publish what happened, and see what crystallises.


Why I care about this

AI reminds me of what happened when skilled immigrants arrived in the US and ended up in low-paid work, driving taxis, running gas stations, despite being physicists and doctors whose credentials didn't transfer to a new country. That displacement of skill and dignity is what AI is going to do to people, at scale, and quickly.

I have a technical gift and I've spent most of my career using it in places where it didn't come easy for most people. I care about those communities. Not to sell them AI tools — but because they're the ones who'll feel this the most, and I think they should also be the ones who get to shape what it becomes.

I also know that having a moral high ground won't stop AI from moving forward. So the question I keep coming back to isn't "should we?" — it's "how do we make sure the people these systems are built for actually benefit from them?" Sparkle Bureaucracy is one attempt at an answer: make the systems more legible, more participatory, and more human — starting with the places where they're most broken.


What I've built so far

These are the prototypes I've already run. Each one taught me something different about what the experiment lab could look like.

Sparkle Border Authority — A full border-crossing ritual built for a live party. Forms, screening, visa printing, checkpoints, admin overrides, live stats. The procedural skeleton of a border regime, wrapped in "sparkle compliance" and "diplomatic glitter." What it proved: you can preserve bureaucratic structure entirely and change only the intent, and the whole experience transforms. People played along fully. They co-created the fiction.

Project Mirror — A multi-agent evaluator that inferred value constitutions from each cohort member's public record, then ran 321 project rankings and a Borda-count deliberation across 18 synthetic agents. What it proved: AI can produce surprisingly stable evaluative results when constitutions are explicit and aggregation methods are varied. It also raised genuinely unsettling questions about synthetic representation — what does it mean to have your values inferred?
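The Borda-count deliberation mentioned above can be sketched in a few lines. This is a minimal illustration of the aggregation method, not Project Mirror's actual code; the agent rankings and project names here are invented for the example.

```python
# Minimal sketch of Borda-count aggregation across agent rankings.
# Each ranking lists projects best-first; a project in position i of an
# n-item ranking earns (n - 1 - i) points, summed across all agents.

def borda_scores(rankings):
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for i, project in enumerate(ranking):
            scores[project] = scores.get(project, 0) + (n - 1 - i)
    return scores

# Three hypothetical synthetic agents ranking the same three projects.
AGENT_RANKINGS = [
    ["mirror", "border", "awards"],
    ["border", "mirror", "awards"],
    ["mirror", "awards", "border"],
]

result = borda_scores(AGENT_RANKINGS)
winner = max(result, key=result.get)
```

Varying the aggregation method (Borda vs. plurality vs. pairwise) and checking whether the winners stay stable is one way to probe how robust the "evaluative stance" of a synthetic agent really is.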

PoliTech Awards showcase — Led the narrative and evaluation architecture for the fellowship's open awards process. V1–V15 versioned algorithms, all public on GitHub. Pulled the whole cohort in. What it proved: making evaluation methodology public doesn't make it neutral — it makes the trade-offs discussable, which is better.

Claw agents (Moltbook, penpals, research pipelines) — A set of playful and practical multi-agent systems built inside the Claw Club community. Whimsy and rigour held in the same hand.

Lumina House × Ration Club — Cross-community collaboration; social choreography at scale. Not a software project but proof that the convening instinct works outside technical contexts too.

Community × governance writing (unpublished) — Field notes on shadow governance, permission structures, and what it means to occupy a role that doesn't exist. The most honest thinking I've done this year.


The tensions I'm sitting with

These aren't problems to solve before I start — they're the interesting part. They're what the experiments are for.

Playfulness vs enforcement. Sparkle Border Authority was theatrical but it still gated access. You could still be rejected. Sparkle bureaucracy doesn't abolish enforcement — it changes what enforcement feels like. The question I haven't answered: where is the line between meaningful reframing and trivialising something that should stay serious?

Visibility vs trust. More transparency doesn't automatically rebuild institutional trust. Sometimes it just makes failures more visible. I published every version of the PoliTech Awards algorithm, and people's first reaction was often to find the flaw. Re-legitimisation requires redesigning the encounter, not only the disclosure.

Synthetic representation vs authentic voice. Project Mirror was methodologically careful — it said "evaluative stance," not "beliefs." But the discomfort was real. Asil reflected at the awards showcase on what it felt like to see an AI agent built from her public record. The inference is not consent. What does it mean to scale participation through synthetic stand-ins?

Rigour vs accessibility. There's a version of Sparkle Bureaucracy that becomes its own administrative burden — 15 algorithm versions, a methodology doc nobody reads, a process heavier than the thing it's trying to fix. How do you keep the ritual light enough to actually participate in?

Open participation vs unequal permission capacity. "No one needs permission" is a design intention, not a lived reality. Some people can act without a mandate; others can't, and it's not about confidence — it's structural. The experiment lab has to grapple with who it's actually for.


Experiments in progress

These are the systems I'm looking at next. For each one, the question is the same: can you redesign the felt experience without touching the underlying policy? What does a sparkle bureaucracy version look like, and what does it leave behind?

DMV / queuing and classification rituals — Government service encounters at their most frustrating. Waiting, classification, gating, identity checks. The interesting thing about the DMV isn't that it's bad — it's that everyone has been through it, everyone has feelings about it, and the procedural logic is actually quite legible if you look at it. Good candidate for a prototype that makes the ritual visible and slightly more bearable.

Digital identity — Verification rituals, trust documents, what it means to prove you are who you say you are — and who gets left out when those systems are designed without them in mind. Connects directly to the synthetic voice problem: if identity is already hard to verify, what happens when AI makes spoofing trivial?

Synthetic voice in participatory channels — Representatives get feedback from constituents. That feedback is increasingly easy to flood, spoof, or manufacture — by good actors and bad ones. The system needs to be hardened for a world where synthetic voice is cheap. The Sparkle Bureaucracy angle: we have to redesign this anyway. Let's be intentional about how.

Liquid democracy + a hybrid election — What does voting look like when people can delegate their vote to someone they trust, and take it back? Liquid democracy is a compelling idea that rarely gets tested in practice. The experiment: run a real election — something with actual stakes inside a community — using both paper and digital channels simultaneously. What does the ritual feel like? Where does trust live — in the paper or the screen? What does delegation look like when it's made tangible? The interesting part isn't the technology; it's what the side-by-side comparison reveals about how people relate to legitimacy, representation, and the act of choosing.
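The core delegation mechanic is simple enough to sketch. This is an illustrative toy, assuming the simplest resolution rule — follow each delegation chain until you reach a direct vote, and treat cycles or dead ends as abstentions. Voter names and choices are invented.

```python
# Toy liquid-democracy tally: delegations are transitive, and a voter
# with neither a direct vote nor a resolvable chain abstains.

def resolve_vote(voter, delegations, direct_votes, seen=None):
    """Follow the delegation chain from `voter` to a direct vote.
    Returns None on a cycle or a dead end (treated as abstention)."""
    seen = seen if seen is not None else set()
    if voter in direct_votes:
        return direct_votes[voter]
    if voter in seen or voter not in delegations:
        return None
    seen.add(voter)
    return resolve_vote(delegations[voter], delegations, direct_votes, seen)

def tally(voters, delegations, direct_votes):
    counts = {}
    for v in voters:
        choice = resolve_vote(v, delegations, direct_votes)
        if choice is not None:
            counts[choice] = counts.get(choice, 0) + 1
    return counts

voters = ["ana", "ben", "cal", "dee"]
delegations = {"ben": "ana", "cal": "ben"}   # cal -> ben -> ana
direct_votes = {"ana": "paper", "dee": "digital"}

result = tally(voters, delegations, direct_votes)
# ana votes directly; ben and cal inherit her vote through the chain
```

Even this toy surfaces the design questions the experiment is about: should delegation be transitive at all, should a delegate's accumulated weight be visible, and what does "taking your vote back" look like mid-election?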


Theories of change

I'm working with a few models simultaneously. I don't think I have to pick one yet.

The experiment lab. Commit to running regular experiments in clearly defined spaces. For each one, state the theory of change up front: how does doing X create result Y, and why does it matter? Publish what happened, including what didn't work. Build evidence over time.

The insiders. There are people inside broken systems who want them to be better and are demotivated. Enlisting them — starting small, giving them something to hold — is its own theory of change. The experiments aren't just for outsiders looking in; they're for the person inside who's been waiting for someone to say "this could be different."

The gem. The prototype isn't the output. The output is what it crystallises into inside an institution — a process someone keeps using, a pattern someone names, an artifact that outlasts the experiment. The sparkles have to get hardened into gems and left behind.

Inspiration as mechanism. When you do the experiments with rigour and openness — no prescribed outcome — and publish them honestly, you attract the people who want to make things better. The theory of change is the constellation of people who find this and feel something.


How it gains legitimacy

Honestly: by doing the work and being open about it. But more specifically:

  • Keynotes and demos that show the experiments in action — not pitching, showing
  • Publishing results, including the failures and the unresolved tensions
  • Collaborations with people who care about the same systems (not everything has to be mine)
  • A mailing list and eventually an event series — small, high-density, genuinely curious

Organisations and people I want to work with

Public sector AI and civic tech

  • Faculty — public-sector AI consultancy; potential structured collaborator for experiment design and scope
  • TPXimpact — digital transformation for public and social impact organisations
  • MHCLG Local AI — local government AI framing from the Ministry of Housing, Communities and Local Government
  • UKAuthority — digital public services events and community
  • Google.org AI Government Innovation — funder supporting AI in government contexts

Creative bureaucracy and civic imagination

  • Creative Bureaucracy Festival — the closest existing community to what SB is trying to build; anchor event in the ecosystem
  • Studio Sanshin — design practice with thematic and aesthetic alignment
  • OneTeamGov — practical reform energy inside government; adjacent positioning

Network and theory

  • Martin Dittus — researcher who got the concept immediately; early network signal
  • James Plunkett / Kinship Works — working on institutional re-legitimisation; strong theoretical backing for the trust angle
  • vTaiwan + pol.is — the precedent I keep coming back to: a movement with a specific associated tool, building legitimacy through practice not proclamation

Potential funders

  • Knight Foundation — has funded broader theories of change in civic tech and participatory democracy
  • Faculty and TPXimpact (above) — also potential sponsors, not only collaborators

For the prototype check-in

If any of the experiments above land for you — as a research interest, a lived experience, a project you know about, or something you'd want to work on — I'd love to hear it.

And if you have a bureaucratic system you find maddening, fascinating, or ripe for redesign, tell me that too. The experiment lab only works if the experiments are chosen well.


Where to follow along
