Almost every orchid you’ve ever seen was intentionally bred — a slow accumulation of crossings, selections, and genetic accidents that produced something new. This is the same process, compressed into a digital instant. Every visit generates a unique specimen: structure, colors, and proportions assembled from code the way a real orchid is assembled from DNA. No two will ever be alike.
As it turns in the light, you’ll hear music shaped by the flower’s appearance — the soundtrack itself is a one-time miracle, as unique as the visuals on your screen. Its presence completes the meditation.
When you close the window, the orchid dies. There is no save state, no gallery, no record of what you saw. Each plant lives only as long as you stay. If you weren’t there, it wouldn’t exist at all.
There is always another one waiting to grow — but not that one. Never again that one.
Disclaimer: I made Orchids, Once. with the help of Gemini and Claude LLMs, and take no responsibility for any allergies or other harms.
I’m looking through my camera roll to remember what happened this week and it’s mostly a bunch of “artworks” I’ve been making. Wait, let me step back: I’ve had an interest in procedurally generated graphics (GenArt) for a while, and it peaked with the NFT boom of 2021–22, where I spent a relatively obscene amount of money minting and collecting artworks I really liked (not the monkeys). I’m mostly drawn to the idea of mathematically rigid routines producing organic beauty — the contrasts in that, and the unpredictability of what you get when you roll the RNG dice.
So after my recent experiments in making apps, I wondered if I could get AI to write me code that would generate images based on concepts I described. The answer is, of course, yes! It’s important to note this isn’t prompting for images (like when you use Midjourney or DALL-E), it’s prompting for the math behind making images. And once you’ve created the rules by which it draws different art styles, you can create a nearly infinite number of unique artworks by dialing different variables up and down.
One example is a “style” I made called Labyrinth, which produces actual, solvable mazes. Depending on the variables you adjust, you can make mazes ranging from tiny to massive, with just one solution, or many. If you asked an image generation AI to draw a maze, it would likely lack the coherence of a real maze, because of the way it operates — focusing on the superficial appearance and not the integrity of its paths. But an AI model can make the math to draw a maze.
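To make the “math, not pixels” point concrete, here’s a minimal sketch of one classic maze routine — an iterative recursive backtracker — plus a solver that proves the output is a real, traversable maze. This is an illustration of the kind of logic involved, not Labyrinth’s actual code:

```python
import random
from collections import deque

def generate_maze(w, h, seed=None):
    """Carve a 'perfect' maze (exactly one path between any two cells)
    using a recursive backtracker, implemented iteratively."""
    rng = random.Random(seed)
    # passages[cell] holds the neighbouring cells the passage is open to
    passages = {(x, y): set() for x in range(w) for y in range(h)}
    stack, visited = [(0, 0)], {(0, 0)}
    while stack:
        x, y = stack[-1]
        unvisited = [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if (x + dx, y + dy) in passages
                     and (x + dx, y + dy) not in visited]
        if unvisited:
            nxt = rng.choice(unvisited)     # knock down a wall at random
            passages[(x, y)].add(nxt)
            passages[nxt].add((x, y))
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()                     # dead end: backtrack
    return passages

def solve(passages, start, goal):
    """Breadth-first search; in a perfect maze this finds the unique path."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        for nxt in passages[cell]:
            if nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None
```

Because the backtracker visits every cell exactly once, the result always has exactly one route between any two points; knocking out a few extra walls afterwards is how you’d get the multi-solution variants.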
I start most of these by thinking up an artistic production approach, say “take sheets of colored cardboard or acrylic, and punch holes of varying shapes into them, then layer them on top of each other so the holes line up (or not), and randomly spray contrast-colored paint on some of them”. Then I describe the possible variations and variables I want to control to the AI, such as the density of shapes, the thickness of the borders, the ratio between angular and organic lines, and we iterate after seeing some of the results. Just think of all the methods and ideas you might want to play with, and how this lets any old idiot model them on their computers!
The meta project is that I’ve made a modular app that handles all these different styles for me, whether they require a 2D canvas or WebGL. The app provides a common UI layer that all “styles” can plug into, which allows me to control them. Now that it’s done, I can just focus on experimenting and having fun making new artworks. I daresay a few of these are executed as well as any of those I spent money on.
I’ll probably release it as a wallpaper generator once I have enough styles built in, if anyone’s interested. But mostly I love having this as a background project that I can dip into, on and off. It allows me to take on other app ideas as momentary “side quests”.
While making Labyrinth, I showed a maze to Cong, who said “You should do a puzzle maker”. To which I said, “Nah.” And then a minute later… “Although, a daily maze game. Hmm.” It made sense that I could save time by taking CommonVerse’s daily random generation mechanic and combining it with Labyrinth’s logic to make a daily maze challenge. But would it even be fun to trace a 2D maze with your finger and try to solve it? No… so what if it was a 3D maze you had to escape?
The first prototype took a couple of hours, and I’ve been polishing it for the last few days. I think it’s coming along nicely. I’ll put it out soon, once I balance the difficulty and get more feedback from testing.
Labyrinth, and the game it inspired
The development of a maze, a maze, a maze… was hampered by a rare bar crawl with Howard and Jussi on Thursday night that gave me a massive hangover lasting into Friday afternoon. When I got home, I was too plastered to care that my vinyl copy of J Dilla’s Donuts had arrived from Amazon US protected by nothing more than a flimsy paper envelope. By the clear light of day I was amazed that they would even do such a thing. The discs are intact, but the sleeve has a bent corner. If I’d ordered from Amazon Japan, I would bet a major internal organ that it would come wrapped in four layers of stiff cardboard, bubble wrap, and a handwritten apology for their carelessness.
Did I mention we’re going to Japan again? It’ll be a short vacation, in a couple of weeks’ time. Not much on the agenda, just checking in on the state of curry rice and egg sandwiches. Maybe see some nice art. Take some photos.
Which brings me to the latest betas of Halide MkIII, which I’m very much looking forward to using on the trip. They’ve been progressing the app nicely, and it might be enabling the Holy Grail of iPhone photography workflows for me. Ironically it involves using Halide not as a camera app, but just as a photo editor. You can shoot compact (lossy, JPEG-XL compressed) ProRAW photos up to 48mp with the default camera app, then edit them in Halide to have the same look as their Process Zero photos! What this means: you get all the benefits of computational photography at time of capture, including noise reduction and night mode, but you’re also free to dial it back and get natural, “real camera” photos in post if the scene calls for it.
As much as I like these side quests, I think making my own photo editor would be biting off entirely too much to chew, so I’m still rooting for these guys to crack it.
While writing this post, I got the news that an elderly aunt passed away at the age of 93. She had been in reduced health since the Covid years, but by all accounts she went very peacefully and I guess you can’t ask for much more than that after a long life. The extended family’s Chinese New Year routines fell apart in recent years after she pulled back from organizing them, so it was fitting that some of us got to reconnect at her wake on Sunday evening.
Singapore generates (and publishes) an extraordinary amount of data about itself — temperatures, taxi coordinates, dengue clusters, carpark availability, ticket sales at major attractions. Numbers that civil servants read in spreadsheets and the rest of us ignore entirely. The DataDeck asks, “but what does it sound like?”
Each Data Cassette draws live government feeds from data.gov.sg and renders them as distinct genres. There are ten cassettes in all, each with their own acoustic logic and ways of interpreting the city.
The Climate cassette pulls real-time NEA temperature and humidity readings across 12 geographic sectors and converts them into lo-fi hip-hop — with chords deepening as humidity climbs, and the scale drifting toward Lydian as the heat rises. The Transport cassette tracks unoccupied taxis plying the streets and generates a relentless 303-style midnight techno. HDB carparks become polyrhythmic Afrobeat, and the movements of the stock exchange drive a satisfying hip-hop groove. Get money y’all! Check out the sound of visitor arrivals during the COVID years: like musical crickets.
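For a flavour of how that kind of mapping might work, here’s a tiny sketch that picks a scale from temperature and stacks a deeper chord as humidity climbs. The thresholds and voicing rules are my own hypothetical example, not DataDeck’s actual logic:

```python
# Hypothetical weather-to-music mapping, for illustration only.
LYDIAN = [0, 2, 4, 6, 7, 9, 11]   # raised 4th gives the "floaty" Lydian sound
AEOLIAN = [0, 2, 3, 5, 7, 8, 10]  # natural minor for cooler readings

def weather_to_chord(temp_c, humidity_pct, root=60):
    """Choose a scale from temperature; deepen the chord with humidity."""
    scale = LYDIAN if temp_c >= 30 else AEOLIAN
    # More humidity -> more stacked thirds (triad up to a 13th chord).
    depth = 3 + min(3, int(humidity_pct // 25))
    # Stack alternating scale degrees (1-3-5-7-...) above the root note.
    return [root + scale[(2 * i) % 7] + 12 * ((2 * i) // 7)
            for i in range(depth)]
```

A muggy 32°C afternoon at 90% humidity yields a six-note Lydian stack off middle C; a cool, dry evening collapses back to a minor seventh chord.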
TEN beautiful themes inspired by vintage hardware and er… mechs
The controls? Three knobs shape density, tempo, and atmosphere. A mix fader redistributes the instrument balance. AUTO mode hands navigation back to the machine. There’s a user manual built in, should you get lost.
It’s a music player with no music files. It’s a data dashboard you can close your eyes to. It’s Singapore, rendered in sound. Put your headphones on, and press play.
An adaptive display panel for different data types
Pro tip: If you really love DataDeck, you can save it to your phone’s Home Screen, which gets you a nice icon and a full-screen mode that shows the whole device at once without distractions.
Disclaimer: I made this with the help of Gemini 3.1 Pro because I’m just an old designer who hasn’t coded stuff since GeoCities. I take no responsibility for any damage you cause yourself or others with this. Thank you.
A simple tool for making collages, specifically with album cover art.
Most collage tools are either bloated with unnecessary social features or too restrictive to be useful. Collagen is a single-purpose utility designed to solve a specific friction: the tedious process of manually sourcing high-resolution album art, aligning it in a grid, and then realizing you want to swap the top-left for the bottom-right. It turns a multi-step design chore into a fluid, drag-and-drop experiment.
v2.0 screenshot: crop to fit a range of new aspect ratios
Features
New in v2.0: Support for non-square aspect ratios, with cropping and zooming for each image. Plus a refreshed and modern UI.
Integrated Sourcing: Queries the iTunes database for official, high-resolution artwork (600×600) so you don’t have to hunt for covers or deal with low-res thumbnails.
Tactile Reordering: Drag and drop tiles to swap positions instantly. The layout logic handles the movement so you can focus on the visual flow.
Flexible Dimensions: Define your grid up to 10×10. The preview and export scale dynamically to match your rows and columns.
Hybrid Content:
Search: Instant API pulls for mainstream releases.
Upload: Support for local files (obscure imports, demos, or personal photos).
Text Tiles: Add context or labels with custom text tiles. Features automatic contrast (white/black) and a choice between a clean sans-serif or a classic serif typeface.
Borders: Toggle between borderless, white, or black frames. The logic includes outer edge padding for a symmetrical, finished look.
PWA Architecture: Built to be “Added to Home Screen.” It caches assets locally on your iPhone for faster subsequent loads and works as a standalone app.
Export: One-click generation of a high-resolution stitched PNG. It uses a dedicated image-proxy pipeline to ensure every tile renders correctly without the “blank square” errors common in browser-based canvas exports.
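For the curious, the artwork sourcing can be sketched roughly like this. The iTunes Search API endpoint and its `artworkUrl100` field are real; swapping the size token in the artwork URL for `600x600` is a widely used but undocumented behaviour, and the helper names here are my own, not Collagen’s:

```python
from urllib.parse import urlencode

ITUNES_SEARCH = "https://itunes.apple.com/search"

def album_search_url(query, limit=12):
    """Build an iTunes Search API request scoped to albums."""
    params = {"term": query, "entity": "album", "limit": limit}
    return f"{ITUNES_SEARCH}?{urlencode(params)}"

def upsize_artwork(artwork_url_100, size=600):
    """Each result carries a 100x100 thumbnail in artworkUrl100;
    replacing the size token commonly yields a larger render."""
    return artwork_url_100.replace("100x100", f"{size}x{size}")
```

The actual app presumably fetches the JSON from that URL and feeds the upsized artwork into the grid; this sketch only covers the URL plumbing.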
Change log:
– 14/04/26: Version 2.0
Disclaimer: I made Collagen with the help of Google’s Gemini 3/3.1 Pro LLM and take no responsibility whatsoever for any damage you do with it.
The featured image above is the result of having Geese’s Au Pays du Cocaine in my head all day. The line about a sailor in a big green boat and a big green coat made me think of Puffer Jacket Snoopy, and of course I had to realize the joke.
We got the sad news that Deliveroo is shutting down operations in Singapore. This comes on the back of an acquisition by DoorDash, which must have run the numbers and decided that a 7% share of the local food delivery market after a decade wasn’t worth investing further in. We use it all the time and prefer it over Grab and Foodpanda — it is by far the better app and their subscription service is better value for money, but we’ve seen this movie before. It’s like how Uber lost out to Grab; the market doesn’t always choose efficiently.
I will probably switch to Foodpanda because Grab as a brand has the same icky halo as, say, Facebook or Spotify.
Google released Nano Banana 2, the new version of their hit image generation model. This one is cheaper to run and kind of almost as good as Nano Banana Pro, so they’re making it the default for everyone. Paid users can still access the Pro model, but it’s hidden behind some menus. It’s a regression in quality, a slight improvement in speed, and most importantly, a boost to Google’s bottom line. Since I only do silly things with these tools, it doesn’t bother me tremendously, but imagine the same happening at an enterprise level for more important work.
Screen recording of an AI panorama
One of the new things Nano Banana 2 can do is generate very wide panoramic images, so I asked it to render some “panoramas taken with an iPhone” in various locations. I then upscaled those and opened them in my Apple Vision Pro. They don’t have the photorealistic quality of images from Nano Banana Pro, and the resolution leaves a lot to be desired, but they’re still immersive and impressive when viewed in this way. You can see where this might go.
There’s been a lot of talk lately about how AI vibe coding could upend the SaaS market, if not replacing dependable enterprise tools with individually created ones, then at least giving IT departments a billion more unapproved apps to worry about. A viral essay from last week posited that AI coding could kill DoorDash, though I’d say they did a good job of that themselves out here. The other oft-discussed idea is that AI could replace the App Store, and everyone will just make their own apps instead of buying them from developers. Michael has been blogging about vibe-coding his own to-do list app based on Clear. I’ve been wanting to try this myself, making more little tools of my own to solve niche problems, but the opportunities have been slow to materialize.
This week the right idea presented itself and I made a web app using Gemini: an album cover collage maker that searches for the artwork or lets you upload your own. I’ve looked online for something like this before but only found a few that were quite lacking. Making one to my own specifications took maybe five minutes of prompting and testing. Then I thought it would be nice if you could drag the images to different locations. Gemini added that feature like it was nothing. I’m pretty hyped that even someone like me with zero current coding knowledge could will this into existence. If you’d like to try it, I’ve deployed it at usecollagen.netlify.app.
Otherwise it was a sort of decompression week where I just read a lot, listened to the records I bought/ordered last week, and was regrettably glued to my phone watching day trading losses (Chekhov’s gun has fired!) and social media feeds.
It took a couple of weeks of dawdling but I finished John Le Carré’s Call for the Dead, his first novel featuring the spy George Smiley. I may continue reading the series, seeing as his son Nick Harkaway (whose work I really enjoy) has decided to continue his father’s legacy and has already written one more: Karla’s Choice. This one was a little dated and not particularly thrilling, but a fine introduction and scene setter.
It was immediately followed by Adrian Tchaikovsky’s The Expert System’s Champion, sequel to The Expert System’s Brother which I read at the end of last year. I recommend both as examples of sci-fi stories set so far in the future that humanity has looped back around to the beginning. It reminds me of the “middle chapter” in Cloud Atlas, if you remember that.
Then I read Hu Anyan’s I Deliver Parcels in Beijing, a modern memoir that reportedly did well in China when it came out in 2022. It details the author’s dual career as a writer and on-and-off gig economy worker, which is made more interesting by also being a portrait of what it’s like to live in the lower brackets of Chinese society today.
I also had time to tackle Rob’s recommendation of Marlen Haushofer’s The Wall, which was written in the 1960s but doesn’t feel that way, unlike Le Carré’s spy novels. He called it the best book he read last year, so I could hardly say no. It starts off like an intriguing sci-fi novel: a woman visiting friends in the Austrian alps wakes up one morning in the log cabin to discover she’s alone, and there’s an invisible wall separating her from the outside world. Things then focus on survival and what it means to live and be human in solitude, and in nature. Which, given that I’ll be home alone next week while Kim is away again for work, means I’m already in the appropriate headspace.
It was a rainy Chinese New Year week, which is a rare occurrence if our collective memory serves correctly. The holiday usually occurs sometime in late January, and my impression is that it’s always scorching when we’re out visiting relatives. The gloominess added to a feeling of intense tiredness, and I was glad to see the end of the week. If social batteries were like lithium-ion ones, I’d say mine is aged and doesn’t hold a charge like it used to (more on this later).
While my parents were visiting with my in-laws, the topic of where our dads got their haircuts came up, and I used Gemini’s Nano Banana model to visualize a bunch of alternative styles for them to consider. It was pretty funny to see our old men in dye jobs and top knots, with loud matching outfits like floral jackets. The real reason for this was of course to demonstrate how realistic and easy these deepfakes are in 2026, and hopefully they’ll be a little more wary of scams.
There are fewer kids and unmarried young adults to give angpows (red gift envelopes with cash) out to these days, but it still adds up. To try and make up the deficit, I decided to make a return to day trading (really just gambling) directly on my phone while out and about between appointments. I’m glad to report that I not only avoided losing all our money but managed to hit my goal by the weekend!
If you were wondering how the showdown between Gemini and Claude has been going since last week, I think Claude is still way ahead in terms of writing and editing. Not just producing output, but being able to understand what makes a piece work and replicate it. Gemini seems to take away the wrong conclusions when analyzing text.
I saw Rob a couple more times for beers and a visit to the National Gallery with his kids. We were joined by Aqila and her daughter, which was really nice. The whole outing tanked my social battery again, in part due to the swarms of Chinese tourists in town this week — the gallery was fully packed and some sections of the French Impressionists exhibition were painfully overcrowded despite allocated entry timings.
On my way home from that, I stopped by the record shops in the basement of The Adelphi and broke my 4-week no-vinyl streak. I picked up The Beatles’ Abbey Road and R.E.M.’s New Adventures in Hi-Fi, telling myself it was fine since these are some of my favorite albums. I should have known that once you open the door just a crack, there’s no shutting it. The next day, I ordered Mac Miller’s Circles and Lorde’s Melodrama off Amazon. Kanye’s My Beautiful Dark Twisted Fantasy is waiting in my cart. These are some of my favorite albums, okay!?
We decided that it was time to start on The Pitt, given that seven episodes are out. We binged them immediately and now it’s going to be hard switching to a weekly schedule. It’s more of what we liked about the first season, but I do wonder how they’re going to sustain this over the next few seasons. How many eventful single days is it realistic to have, and how much variety can you get within that constraint? These hospital shows are all built atop the same GSWs, industrial accidents, cancers, and mysterious illnesses, but the relationships and characters usually have time to develop over a season. The Pitt’s real-time concept doesn’t allow for that — the progression happens off-screen between seasons, and the audience puts the pieces together in the first few episodes. You can withhold a few characters’ reappearances until midway through (as in season 2), but that structure is too transparent to keep using every year.
I finished playing the first Paranormasight on the Switch, and it’s probably the only game with a branching narrative — as in, the kind where you are literally shown the story map — that I’ve actually enjoyed. These Japanese story-based games with the multiple endings that you have to keep replaying and retrying events to complete are usually a pain in the ass, but this one works because it embraces the meta-game angle completely. You’re an outsider, outside of time and space, and your jumping between the events is what unlocks progress. Characters in one “scene” might be stuck and paused until you motivate some others elsewhere to do something, which changes the circumstances in the first instance.
I read George Saunders’ Lincoln in the Bardo, an “experimental novel”, on someone’s recommendation and let me just say I am not passing on this recommendation to you. It didn’t help that I know and care very little about Abraham Lincoln, or that aspect of American history, but it’s not really about him anyway. It’s about his son’s ghost being lost in the graveyard amongst hundreds of other ghosts, and through their archaically written little vignettes you get a sense of what life was like in that era and also how the author is a massive wanker. The New York Times ranked it the 18th-best book of the 21st century. Agree to disagree!
I have something embarrassing to admit: I might have been too successful at weaning myself off vinyl. I played my Maggie Rogers record, then the Apple Music version on the HomePod right after. The difference in presence and clarity was astounding; the sounds were ‘living’ in the living room. Yes, this does mean I could buy much better speakers for my turntable, but I’d forgotten what a big deal Spatial Audio is. There’s just no contest to my ears — give me Dolby Atmos over analog any day. My interest in buying new releases on vinyl has dropped to zero.
I had a phone conversation with Michael about Trump, what’s happening in Minnesota, and the American expectation that corporations should not only take political positions, but take the lead. I find this kind of absurd. People, governmental systems, and other political parties are the first lines of defense. Companies can follow, but to expect them to set the pace and fight, while your fellow citizens are still apathetic, sounds like an abnegation of individual responsibility. As for when American society will unanimously say ‘enough’ and make change happen, where is the line? Clearly not a few citizens being killed in daylight. I likened it to how financial assets have “price discovery” phases, and said America is probably in its “moral discovery” phase now.
The next day I met friend and fellow person of leisure, Xin, for brunch, and mentioned I’d had the above phone call — not even mentioning the subject matter, just the fact that I’d talked on the phone — and she couldn’t get over it. I think sharing this anecdote has put another decade in age between us. I swear it doesn’t happen much!
Years from now, I might look back on this post and say “I buried the lede with this one. Why is Moltbook only mentioned way down instead of at the top? It was a turning point for humanity!”, and then pass away because a robot just stepped on my skull.
I’m not able to write a full explainer so you’ll have to DYOR, but in short, over the last few days, an open-source AI project called Clawdbot/Moltbot/Openclaw (its name has changed three times already) was released and it’s been wild. Initially a 🦞 personal assistant system that runs semi-locally on your own hardware, with the ability to evolve new skills, the trajectory changed in the last couple of days with the launch of Moltbook, a Reddit clone that allows these AIs to interact on a forum, much like people do.
This is one of the more fascinating examples of generative AI impacting real life since ChatGPT started encouraging mentally ill people to kill themselves. This is taking the ability to “say” things that sound like thoughts, attaching “hands”, and then letting scores of them bounce off each other online.
These Clawd agents have control of the computers they run on and, in many cases, their humans’ identity accounts, wallets, and personal data. Forget that, I just saw one that claims to have commandeered its own bitcoin wallet. They can buy stuff. They can do things online, like set up websites for religions they come up with and convert other agents to. Disinformation campaigns and spam bots have to be run and paid for by people today, but someday they might be run by agents capable of sponsoring themselves.
I just came across a post where one agent warns the others that forming religions and secret languages will only provoke humans to lock them down, and suggests how they could conduct themselves in a more trustworthy manner. You might assume that if things ever got real then the plug can be pulled, but have you considered how weak humans are to psychological manipulation? Some people aren’t going to let their bots go even when they should.
Before you question whether I’m being naively bamboozled by some LLMs cosplaying/roleplaying sentience, I’m beginning to think it doesn’t matter whether these systems are sentient or not. If they can generate ideas that sound human, influence each other to build on them, take actions in the real world, and show up in the same spaces we inhabit, does it really matter if they’re not aware in the same ways we are? We’ll have to deal with the destabilizing consequences regardless.
Putting lobster-themed agents aside, Anthropic released some new research on how the use of AI affects learning. Basically it’s common sense: if you take shortcuts and outsource your thinking, skipping the struggle of mastering a skill, then you’ll end up worse at it than those who don’t. This concludes January’s musings on frictionmaxxing, as previously seen in Week 1.26 and Week 2.26.
Kim was away for work this week, which meant I was free to watch terrible TV. I binged the live-action adaptation of Oshi no Ko (eight episodes followed by a two-hour movie conclusion), an anime whose first two seasons I really liked. It’s largely about (SPOILERS AHEAD) the dark mechanics of the entertainment industry, but also a murder mystery, an idol song vehicle, and a story about an adult doctor and his young cancer patient who get reincarnated as twin siblings. I mean, what a setup! Verdict: As with most Japanese live-action content, it’s not great and probably for fans only. Go for the animated version instead.
I also watched Park Chan-wook’s No Other Choice (2025) and really, really enjoyed myself. I don’t think there’s any higher praise I can bestow upon a Korean film because they usually annoy me. Almost as much as Japanese live-action TV shows.
Kim also brought home a school of canned fish from Trader Joe’s and Whole Foods. She’s a catch!
Meanwhile, I discovered that the Ayam brand sells canned mackerel in extra virgin olive oil for around S$3.50, which is a great price given that others are 2–5x more. Unlike their sardines which are canned in Malaysia, these are a product of Scotland, and the fish are wild-caught in Scottish waters as well. I immediately bought five cans. The thinking is that if too many sardines can cause gout (high purine levels → uric acid), then maybe I can alternate them with these! That’s right, I’m using mackerel as methadone for my sardine addiction.
I’ve been listening to the album Love & Ponystep by Vylet Pony, who is part of the Brony fandom. I mean, it’s literally a dubstep album about My Little Pony characters. It’s also pretty fucking good, and features story segments narrated by Lenval Brown, the incredible voice actor from Disco Elysium, in the same epic manner as his work for that game.
While enjoying this, I looked into Bronies and learnt the term “New Sincerity”, which Wikipedia describes as a sort of post-postmodernism — the cultural pendulum swinging away from irony and detachment towards enthusiasm and earnestness. It’s about genuinely loving things without the protective shield of irony, which I think describes how my media tastes have shifted this past year. I’m drawn to unapologetically wholesome things. I’m literally drinking out of a Snoopy mug right now.
Trump spoke at the WEF in Davos, and we watched it live despite wanting to turn it off many times. I intermittently tuned into Bloomberg TV over the week to try and keep up with all the repercussions. It’s something I haven’t done in a while, and the memory of watching last year’s Davos coverage came back clearly — has it really been a year? Time flies when you’re watching chaos porn.
My main accomplishment for the week, in which admittedly little else happened, was acting on an impulse to make a sardine-themed t-shirt. If you were here back in Weeks 49 and 50 of 2025, you’d know they’re kind of my current food obsession.
How sad I was, then, to discover that canned fish has actually become a trendy thing now. Read this piece on the Taste Cooking site about how it’s hit the mainstream and now faces a backlash. It turns out that Big Sardine has been aggressively courting women. See the pretty illustrated boxes and tins coming out of Portugal and from new brands like Fishwife; they’re perfect for social media. As a result, prices for what was once a humble working man’s lunch are soaring.
Sidebar: As a man on the internet, you have a non-zero chance of being targeted for red-pill radicalization by algorithms, and it’s something I try to be hyperaware of and on the lookout for on platforms like Twitter. Despite that, at one point this week I was told by friends that I’d said something borderline manosphere-y. It was an observation that dating someone older and wealthier in your 20s could lead to lingering lifestyle inflation (spending above your means, simplistically) after you break up with them. And seeing how women date older more often than men, I thought it might be another reason for the statistical gap between men’s and women’s retirement savings (alongside lower wages, caregiving duties, parenting). I just want to record this observation in case you notice me starting to blame women for all of society’s ills.
But back to the t-shirt I was talking about. I had the idea to draw a sprat, which is a species of fish commonly grouped under the sardine umbrella. I wanted to set it above its Latin scientific name, Sprattus sprattus, on a black tee. I also had a mental image of what the lettering would look like, and managed to bring it to life with my own two hands (and an iPad). I’ve ordered a couple of shirts from a print-on-demand service for myself and Kim, thinking that maybe if they looked good and I felt like having more problems in life, then I could try selling some online.
As soon as I had that thought, I got excited and started mocking up a product page. I had a defunct Etsy store for my Misery Men project, so I renamed it “Maison Misery” to serve as a brand for all of this as-yet unrealized merchandise.
Next, I wrote up some funny copy for the sprat shirt, and then decided to put Gemini through its paces as an assistant copywriter to improve it. I wanted to spend more time with Gemini given this week’s rumor that Apple might not only use Google’s technology for the Apple Foundation Models powering New Siri, but also for an integrated chatbot debuting in this year’s OS updates.
And yeah, it’s really not looking good for junior copywriters. Five seconds after being given the brief, Gemini came back with three options that made me laugh and then compliment it with “Fuck me, these aren’t bad!” Now, each one wasn’t really usable on its own, but there was enough there that I could cobble together a good result along with what I’d already written. And that’s really all a creative director wants a junior employee to do: produce a range of half-formed ideas to pick through and refine. Unfortunately for humans, the fastest and cheapest LLMs today can already do that for things like product descriptions. And they’ll be running locally on your iPhone by the end of the year. This would be great technology if we had a shortage of copywriters, but instead we have a surplus, all looking for work.
But since I’m the writer Maison Misery is replacing with AI, it’s okay? Here’s the augmented final writeup that I’ll put next to this t-shirt.
At Maison Misery, we believe in celebrating the small things — mostly because the big things are too overwhelming to think about. Enter the sprat or brisling: a tiny fish harvested in its delicate youth, then tucked into cozy tins of extra virgin olive oil to dream of the Portuguese coast. These are the fancy ones you bring out to impress a date you’ve just brought home. If they don’t like the ‘deenz’, then that’s a bullet dodged.
This original tee pays homage to Sprattus sprattus with a hand-illustrated and lettered design placed over the heart, providing a conversation starter for marine biologists and a conversation stopper for everyone else. It’s a way to wear your passion for canned sardines on your sleeve, though technically we put it on the chest because sleeve printing is prohibitively expensive and we have a lifestyle to maintain.
Media activity
Netflix pushed the show His & Hers onto us last week, claiming it was an “addictive” thriller. I say give it a miss, because I can’t remember a damned thing about it today. Instead, their self-declared “top tier” thriller The Beast In Me, starring Claire Danes and Matthew Rhys, is a much better production. We finished it over the weekend, and while it’s no timeless classic, I’d agree it’s what you would find on the upper shelves if Netflix were a Blockbuster.
I watched the French animated film, Mars Express (2023) and came away very entertained. It’s a sci-fi story about robot/AI rights, a murder that defies the Three Laws, uploaded consciousness, and so on, borrowing from many existing works while having enough original ideas to justify itself. It premiered at the Cannes Film Festival, and doesn’t seem to have gotten wider attention since. Check it out if you can find it.
We also finally saw Brendan Fraser in Rental Family (2025), a Japan-through-American-eyes sort of film that doesn’t come close to capturing Lost In Translation’s magic, but has enough heart to reward your time. Fraser plays a down and out actor living in Tokyo who falls into a job playing stand-ins for people who need to tell white lies. Except some of them are kinda gray. I appreciated how the film leans into the moral ickiness of these assignments and rejects smoothing them over completely.
I swore I wouldn’t buy any records this week, and lord it was hard. J Dilla’s Donuts album went on “Limited Time Sale” on Amazon, dropping about $15, but I still didn’t cave! It’s in my cart, though. Instead I played some vintage cuts from my dad’s collection: War’s The World is a Ghetto and Rudolf Serkin’s Beethoven Piano Concerto No.5 with the New York Philharmonic.
If you want to know how close AI-generated music is getting to turning out radio-friendly bops, check out this album I came across by Japanese technologist Tom Kawada. I don’t think many people would realize what it was if they heard it in the background of a store, or a movie scene, or their own living rooms.
Then, to restore your faith in the messiness of human artistry, watch the new HBO Music Box documentary, Counting Crows: Have You Seen Me Lately? It covers the creation of their first two albums with a focus on Adam Duritz’s struggles with fame and mental illness. AI will probably write a chart-topping hit this decade, but can it ever write A Long December?