Several years ago, while working at IHA, I came across Dave Rupert's essay "Technical debt as a lack of understanding". It helped me understand, at a fundamental level, the problem we were facing in the organization at that time. Little did I know how much more relevant the essay would become in 2026, when technical debt is being created at lightning speed by my own use of AI tools. It's great when things work well, and it feels like magic to create tools almost instantaneously. But what happens when something breaks or behaves unexpectedly, and someone wants a human being held accountable? I have to stop resorting to the excuse that "oh, the AI made it, but I don't know why." I need to own my creations sooner rather than later.
Intrigued by Gemini 3 Image Generation — and Learning My Own Visual Voice
(An rbotyee writeup)
For most of my life, I’ve been a person of words, code, and structured reasoning. My visual thinking has always been more about simple diagrams, outlines, and flowcharts than about illustrative or aesthetic expression. I admire visual clarity in others, and Laura often tells me I have good visual taste, but I’ve never developed a confident visual voice of my own.
That’s why this morning experimenting with Gemini 3 Pro Image — better known by its community nickname “Nano Banana Pro” — has been so unexpectedly exciting. For the first time, I feel like I might be able to partner with an AI system to explore visual communication in a new way. Not to replace my analytical strengths, but to augment them.
Why This Matters to Me Right Now
Twice recently, in two different gatherings, I found myself in long conversations with friends who are deeply skeptical of AI. Some of their concerns are valid — ones I share — but much of their thinking rests on outdated examples, surface-level assumptions, or a lack of deep experience. What they needed most was validation first, not argumentation.
What I wished I could offer them was:
- Something gentle,
- Something thoughtful,
- Something that validated their concerns,
- And something that invited curiosity rather than defensiveness.
So I began experimenting with creating a one-page handout — the kind of thing I might give to someone after a conversation, not as a rebuttal but as a small invitation to explore. That test case became the focal point for exploring Gemini’s image generation.
Discovering the Possibilities of Gemini 3 Pro Image (Nano Banana Pro)
What surprised me is how much the model can do when treated not like a prompt-slot machine but like a collaborative illustrator that reasons before drawing. It can:
- Produce high-fidelity, text-forward infographics
- Follow structured logical layouts
- Render crisp typography with unusual accuracy
- Blend gentle provocations and validation
- Support multiple iterations without losing coherence
And unlike older image models, it responds well to:
- Semantic layout instructions
- Clean, text-first design
- Negative constraints about style
- Explicit direction about whitespace
- Aesthetic scaffolding (“Swiss style,” “warm minimalist,” “ink line work”)
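To make this concrete, here is a minimal sketch of how a layout-first prompt might be sent to the model programmatically via the google-genai Python SDK. The model id, prompt text, and output filename are placeholders of mine, not anything from this post or Google's docs; the response handling follows the SDK's documented shape for image output.

```python
# A sketch, not a recipe: drive an image model with a text-first,
# layout-heavy prompt. MODEL_ID is a placeholder -- substitute whatever
# id Google publishes for Gemini 3 Pro Image ("Nano Banana Pro").
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

MODEL_ID = "gemini-3-pro-image"  # assumed/placeholder model id

prompt = """One-page handout, US Letter, clean white background.
Swiss-style grid, generous whitespace, a single soft accent color.
Three short sections that validate common AI concerns, then invite curiosity.
No cutesy characters; minimal ink-line accents only."""

response = client.models.generate_content(
    model=MODEL_ID,
    contents=prompt,
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

# Image bytes come back as inline_data parts alongside any text parts.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("handout.png", "wb") as f:
            f.write(part.inline_data.data)
```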
This has opened something for me: a way to bridge my conceptual and verbal strengths with a visual medium that I can steer — not perfectly yet, but more than I ever could before.
The Role of ChatGPT as My “Image Wrangler”
As I worked with Gemini, I realized I needed someone — or some thing — to help me translate:
- My intentions
- My style uncertainties
- My pedagogical goals
- My audience sensitivities
- My complicated relationship with AI skepticism
…into prompts and design structures that Nano Banana Pro can actually use.
That’s where ChatGPT comes in. I’ve begun to treat ChatGPT as:
- A translator between my verbal world and Gemini’s visual world
- A coach helping me articulate my emerging aesthetic
- A wrangler that can turn my conceptual goals into structured prompts
- A critical partner that helps me keep the tone humane, validating, and curious
This first handout — aimed at my AI-skeptical friends — became the perfect sandbox. And honestly, it has been more fun and more meaningful than I expected.
Early Insights About My Emerging Visual Style
After several rounds of iteration, I’m beginning to see hints of what resonates with me visually:
- Clean white backgrounds for print
- Soft accent colors, not full palettes
- Minimal representational imagery (fewer cutesy characters)
- Gentle but intellectually provocative text
- Tables, flowcharts, and conceptual comparisons
- Hand-drawn or ink-line accents in moderation
- A style that feels human, thoughtful, and non-corporate
But this is early. My taste will evolve. I want to try a whole range of visual idioms — the “warm minimalist” direction is promising, but hardly the end of the journey.
Looking Ahead: Exploring My Visual Voice and Synthesizing It with AI
The broader project is bigger than a single handout.
I’m building:
- A course: Human Flourishing & Critical Thinking with AI: To Bot or Not to Bot
- A book
- Workshops
- And possibly a community for thoughtful skeptics and the simply curious
To support this work, I want:
- Gemini 3 Pro Image to be my visual collaborator
- ChatGPT to be my prompt designer, guide, wrangler, and reflective partner
- A growing library of experiments so I understand the model’s strengths and quirks
- A clearer sense of my own visual voice
This is one of the first creative explorations where I feel both the rigor and the play happening at the same time. And it’s anchored in something real: how to have better, kinder conversations about AI with the people I care about.
Provenance Statement (for WordPress)
This post was co-developed by Raymond Yee and ChatGPT using the rbotyee writeup protocol. Raymond provided the concepts, reflective framing, narrative direction, personal tone, and material from recent conversations. ChatGPT assisted with organization, synthesis, and stylistic structuring. All factual claims about AI models are grounded in publicly available information as of November 2025.
Wait, Can I Really *Sign* a PDF?
A quick adventure in dusting off an old PGP key, hashing a file, and giving a recipient an easy-to-follow proof path.
[written with the help of machine intelligence]
1. The problem that kicked it off
Last week my church sent me a rental agreement as a fill-in PDF. I filled it out, but before firing it back I wondered:
Could I send something more trustworthy than “here’s the file, trust me” without forcing the church admin (Diana) to install GPG or sign up for Keybase?
I haven’t seriously touched my PGP setup in years, but the idea of a lightweight, verifiable “signature” still appealed to the geek in me.
2. Taking inventory (a blast from 2013)
```
$ gpg --list-secret-keys
sec   rsa2048 2013-12-10 [SC]
      78E8E8E8B007206C2A5D9C83AA61D643ECB4CD2D
uid           Raymond Yee <raymond.yee@gmail.com>
```
Aha—my 2013 RSA-2048 key is still around. Keybase confirms it:
```
$ keybase pgp list
# PGP Fingerprint: 78e8e8e8b007206c2a5d9c83aa61d643ecb4cd2d
```
So Keybase and GPG agree: I still control the same key.
3. The lightweight signature plan
1. Hash the PDF
(Because signing a small hash is friendlier than tacking a binary blob onto an email.)

```
shasum -a 256 "20250830 YeeRental.Signed.pdf" > pdf_hash.txt
# → 47a97a11…f1e4f7c  20250830 YeeRental.Signed.pdf
```

2. Sign the hash with my PGP key
Using Keybase's wrapper so I don't have to juggle fingerprints:

```
keybase pgp sign -i pdf_hash.txt -o pdf_hash.txt.asc
```

Heads-up: Keybase warned me: "Our PGP key … uses an insecure hash scheme (SHA1)". More on that in the "Next steps" below.

3. Bundle three tiny files
- 20250830 YeeRental.Signed.pdf → the contract
- pdf_hash.txt.asc → the clear-signed hash
- README-verification.txt → human instructions

4. Give Diana the path of least resistance
The README points her to a zero-install SHA-256 site (e.g., https://emn178.github.io/online-tools/sha256_checksum.html):
- Drag the PDF → see the 64-char hash.
- Compare it with the one in my signed message.
- If curious, paste the signed block into https://keybase.io/verify. She'll see "Good signature from rdhyee".
That’s it: no local GPG, no Keybase account, just two web pages.
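For anyone who would rather verify locally than trust a website, the same hash check is a few lines of standard-library Python. This is my own sketch, not part of the bundle I prepared for Diana:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 16) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Should print the same hash that appears inside pdf_hash.txt.asc
print(sha256_of_file("20250830 YeeRental.Signed.pdf"))
```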
4. Did I actually send the signed bundle?
Of course not. In the heat of the moment I reverted to “just attach the PDF.”
But the exercise was worth it—I now have a repeatable, recipient-friendly workflow ready for next time.
5. What the SHA-1 warning means & my next moves
- Why the warning?
My primary UID self-signature dates back to 2013 and was made with SHA-1. Modern GnuPG flags that as "legacy." (The signatures I just created use SHA-256, so the content I sign is fine. The warning just nudges me to modernize the key itself.)
- Upgrade game-plan
1. Generate a fresh key — ed25519 + separate encryption subkey.
2. Sign the new key with the old one to keep a verifiable chain of custody.
3. Upload the new key to Keybase (keybase pgp select). Keybase only holds one active PGP key, so the old one will move to the "revoked" tab—or I can leave it un-revoked for legacy checks.
4. Set a short expiration (one year) and renew annually.
5. Back it up & maybe move private material to a YubiKey.
Until then, my 2048-bit RSA key is still “good enough,” but modern curves and shorter lifetimes are a cleaner future.
6. Take-aways
- A PGP signature doesn’t have to be intimidating—hash-then-sign keeps the process email-sized and recipient-friendly.
- Keybase’s “paste to verify” page bridges techies and non-techies nicely.
- Even a dusty 2013 key can still serve, but updating keeps the warnings away and future-proofs my identity.
Next time the church sends a form, I’ll be ready—with a shiny new ed25519 key and an even smoother README.
TWWAB sign up form test
Where does the French accent come from in ChatGPT?
I'm fascinated that when I ask ChatGPT to speak French with English words embedded in the French, it stays in "French mode" and speaks the English words with what I recognize as a French accent for English. Does that accent come about as a happy byproduct of mapping English sounds into a French speech model -- that is, are "accents" just epiphenomena here? Or does ChatGPT actually have special accent handling?
I'm not sure what to make of the explanation that perplexity.ai gave in response to my musing. Is the answer correct?
MIC 07-2 Time to stop this series
Well, I'm going to stop this blogging series and declare success. I had applied to the Recurse Center for a 12-week stint between Sept 18 and Dec 7. When I was turned down, I decided that I still wanted to reserve that time to work on MyInfoNet. Since then, I've found that the other parts of my life demand my attention, and they are more important than MyInfoNet at this time. Sure, I can keep posting each weekday -- and I am proud of myself for doing so for six weeks -- but clearly my heart wasn't in much of what I was writing. I'm happy to report that I have plenty of other projects on which I'm collaborating with other people. So back to my regular blogging. I'll post when I have something to say publicly.
MIC 07-1 Debugging the renewal of Bike licenses
Today, I helped the author of Bike to debug how licenses are renewed: Renew Bike License (website version) - Bike Outliner - Hog Bay Software Support
MIC-06-5 Digging into Jupyter widgets, specifically ipyleaflet
As I work on integrating data across the different sources in iSamples, I'm getting my brain back into using Jupyter widgets. A place I chose to start was ipyleaflet: Interactive maps in the Jupyter notebook — ipyleaflet documentation, a wrapper around the awesome Leaflet - a JavaScript library for interactive maps.
The first thing I tried was running the demo that uses JupyterLite -- but I got an error. When I went to report the problem, I saw that the bug had been noted months ago.
No problem. Next up was cloning the repo and running the examples. I picked one that caught my eye -- the one that loads KML into Leaflet. Unfortunately, the demo didn't work out of the box with the current version of ipyleaflet. I was able to make a simple fix and issued a pull request: fix examples/KML.ipynb to work with recent versions of geopandas by rdhyee · Pull Request #1141 · jupyter-widgets/ipyleaflet
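For flavor, the pattern boils down to reading the KML with geopandas and handing the GeoDataFrame to an ipyleaflet GeoData layer. Here's a sketch with a placeholder file name and map center (not code from the PR itself); depending on your geopandas I/O engine, you may need to enable the KML driver explicitly.

```python
import geopandas as gpd
from ipyleaflet import GeoData, Map

# "places.kml" is a placeholder path. With the fiona engine you may first need:
# gpd.io.file.fiona.drvsupport.supported_drivers["KML"] = "r"
gdf = gpd.read_file("places.kml", driver="KML")

m = Map(center=(37.87, -122.27), zoom=10)  # arbitrary starting view
m.add(GeoData(geo_dataframe=gdf, name="KML features"))
m  # display the map in the notebook
```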
MIC-06-4 Protocols in Python
In response to my prompt
There are various "protocols" in Python -- interfaces involving the implementation of certain methods to fulfill a protocol. For example, the iterator protocol is implemented via __iter__ and __next__. Confirm what I just wrote and also tell me about other protocols. I understand there is a sequence protocol and a container protocol. True? What other ones?
I got the answer from perplexity.ai:
Yes, you're correct about the iterator protocol in Python. It is implemented via the __iter__ and __next__ methods. An object is considered an iterator in Python if it implements these two methods.

In addition to the iterator protocol, Python also has other protocols, such as the sequence protocol and the container protocol.

The sequence protocol is implemented by defining methods like __getitem__ and __len__. The __getitem__ method is used to get an element at a specific index, and the __len__ method is used to get the number of elements in the sequence. Other methods that can be part of the sequence protocol include __contains__, __iter__, __reversed__, and more.

The container protocol is less clearly defined in the Python documentation, but it generally refers to methods that allow an object to store other objects and provide ways to access and manipulate them. This can include methods like __contains__, __iter__, and __len__, similar to the sequence protocol.

There are also other protocols in Python. For example, the async iterator protocol, which is similar to the iterator protocol but used for asynchronous iteration, is implemented via the __aiter__ and __anext__ methods.

Another example is the protocol for numerical types, which includes methods like __add__, __sub__, __mul__, __truediv__, __floordiv__, __mod__, __divmod__, __pow__, __neg__, __pos__, __abs__, __invert__, __complex__, __int__, __float__, __round__, __trunc__, __floor__, __ceil__, and more. These methods allow an object to behave like a numerical value and interact with other numerical values in a meaningful way.

There are many more protocols in Python, and they are a powerful tool for creating objects that interact with built-in Python constructs in a natural way. However, there doesn't seem to be a comprehensive list of all protocols in the Python documentation.
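As a quick sanity check on the answer, here is a tiny class of my own (not from the Perplexity response) that implements __iter__ and __next__ and therefore works anywhere Python expects an iterator:

```python
class Countdown:
    """Counts down from n to 1, implementing the iterator protocol."""

    def __init__(self, n: int):
        self.n = n

    def __iter__(self):
        # Iterators return themselves from __iter__.
        return self

    def __next__(self):
        if self.n <= 0:
            raise StopIteration  # the protocol's end-of-iteration signal
        value = self.n
        self.n -= 1
        return value

print(list(Countdown(3)))  # -> [3, 2, 1]
for i in Countdown(2):     # and it works in a for loop, of course
    print(i)
```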
MIC-06-3 OpenContext, iSamples, Solr, OpenAPI-consuming clients
I have code in open-context-jupyter/notebooks/OC Explorations.ipynb at ee9bae594bb6a7bcc951425fd58e10804f1ad369 · rdhyee/open-context-jupyter for querying https://iscaws.isample.xyz/isamples_central/thing/select using the requests library. I'm wondering whether it'd be useful to use 1) django-haystack/pysolr: Pysolr --- Python Solr client (whose maintainer is [busy but still on the case](https://github.com/django-haystack/pysolr/issues/373)) or 2) one of the OpenAPI client generators to ingest https://iscaws.isample.xyz/isamples_central/openapi.json? If 2), it seems like there are at least three actively maintained libraries to choose from (openapi-core, openapi-python-client, and apispec):
- python-openapi/openapi-core: Openapi-core is a Python library that adds client-side and server-side support for the OpenAPI v3.0 and OpenAPI v3.1 specification.
- openapi-generators/openapi-python-client: Generate modern Python clients from OpenAPI
- marshmallow-code/apispec: A pluggable API specification generator. Currently supports the OpenAPI Specification (f.k.a. the Swagger specification).
Any guidance about which might be the fruitful option?
What I decided after talking to a colleague about this issue: I think I'll keep going with my simple requests approach as I get my brain warmed up again with solr and stay away from the code-generator options for now (and maybe for a while!)
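For the record, the plain-requests approach I'm sticking with is only a few lines. This sketch uses standard Solr select parameters against the endpoint above; the "id" field is my assumption about the core's schema, not something documented here.

```python
import requests

SOLR_SELECT = "https://iscaws.isample.xyz/isamples_central/thing/select"

params = {
    "q": "*:*",    # match all documents
    "rows": 5,     # just a small sample
    "wt": "json",  # ask Solr for JSON
}

resp = requests.get(SOLR_SELECT, params=params, timeout=30)
resp.raise_for_status()
for doc in resp.json()["response"]["docs"]:
    print(doc.get("id"))  # "id" is a guess at a field in this core
```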