Tag: Photo Editing

  • Week 15.26

    I’m looking through my camera roll to remember what happened this week and it’s mostly a bunch of “artworks” I’ve been making. Wait, let me step back: I’ve had an interest in procedurally generated graphics (GenArt) for a while, and it peaked with the NFT boom of 2021–22, where I spent a relatively obscene amount of money minting and collecting artworks I really liked (not the monkeys). I’m mostly drawn to the idea of mathematically rigid routines producing organic beauty — the contrasts in that, and the unpredictability of what you get when you roll the RNG dice.

    So after my recent experiments in making apps, I wondered if I could get AI to write me code that would generate images based on concepts I described. The answer is, of course, yes! It’s important to note this isn’t prompting for images (like when you use Midjourney or DALL-E), it’s prompting for the math behind making images. And once you’ve created the rules by which it draws different art styles, you can create a nearly infinite number of unique artworks by dialing different variables up and down.

    One example is a “style” I made called Labyrinth, which produces actual, solvable mazes. Depending on the variables you adjust, you can make mazes ranging from tiny to massive, with just one solution, or many. If you asked an image generation AI to draw a maze, it would likely lack the coherence of a real maze, because of the way it operates — focusing on the superficial appearance and not the integrity of its paths. But an AI model can make the math to draw a maze.
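
    For the curious, the classic way to guarantee a solvable maze is to carve it as a spanning tree of a grid: knock down walls one at a time so that every cell ends up connected by exactly one route. This is a minimal sketch of that kind of routine — not Labyrinth’s actual code, just an illustration of the math an AI model can write for you:

```python
import random
from collections import deque

def generate_maze(w, h, seed=None):
    """Carve a 'perfect' maze: a spanning tree of the grid, so exactly
    one path exists between any two cells (i.e. it is always solvable)."""
    rng = random.Random(seed)
    passages = {(x, y): set() for x in range(w) for y in range(h)}
    stack, visited = [(0, 0)], {(0, 0)}
    while stack:
        x, y = stack[-1]
        options = [(x + dx, y + dy)
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if (x + dx, y + dy) in passages
                   and (x + dx, y + dy) not in visited]
        if options:
            nxt = rng.choice(options)      # roll the RNG dice
            passages[(x, y)].add(nxt)      # knock down the wall, both ways
            passages[nxt].add((x, y))
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()                    # dead end: backtrack

    return passages

def solve(passages, start, goal):
    """Breadth-first search along the carved passages; returns the path."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        for nxt in passages[cell]:
            if nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None
```

    Because the result is a spanning tree, any two cells are joined by exactly one route; carve a few extra passages afterwards and you get mazes with multiple solutions — exactly the kind of dial you can turn.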

    I start most of these by thinking up an artistic production approach, say “take sheets of colored cardboard or acrylic, and punch holes of varying shapes into them, then layer them on top of each other so the holes line up (or not), and randomly spray contrast-colored paint on some of them”. Then I describe the possible variations and variables I want to control to the AI, such as the density of shapes, the thickness of the borders, the ratio between angular and organic lines, and we iterate after seeing some of the results. Just think of all the methods and ideas you might want to play with, and how this lets any old idiot model them on their computers!
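
    As a toy illustration of what “dialing different variables up and down” looks like under the hood — the parameter names and ranges here are made up for the example, not my actual style definitions — each seed becomes one reproducible artwork recipe:

```python
import random

# Hypothetical parameter ranges for a "punched cardboard" style;
# the real variables are whatever you negotiate with the AI.
PARAM_RANGES = {
    "shape_density": (0.1, 0.9),      # how many holes per sheet
    "border_thickness": (1.0, 12.0),  # outline weight in pixels
    "angular_ratio": (0.0, 1.0),      # 0 = all organic curves, 1 = all straight edges
    "layers": (2, 6),                 # sheets stacked on top of each other
}

def sample_params(seed):
    """One seed -> one reproducible set of dials for a unique artwork."""
    rng = random.Random(seed)
    params = {}
    for name, (lo, hi) in PARAM_RANGES.items():
        if isinstance(lo, int):
            params[name] = rng.randint(lo, hi)
        else:
            params[name] = round(rng.uniform(lo, hi), 3)
    return params
```

    The drawing routine then reads these values, so re-running the same seed reproduces the same piece while a new seed rolls a fresh variation.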

    The meta project is that I’ve made a modular app that handles all these different styles for me, whether they require a 2D canvas or WebGL. The app provides a common UI layer that all “styles” can plug into, which allows me to control them. Now that it’s done, I can just focus on experimenting and having fun making new artworks. I daresay a few of these are executed as well as any of those I spent money on.

    I’ll probably release it as a wallpaper generator once I have enough styles built in, if anyone’s interested. But mostly I love having this as a background project that I can dip into, on and off. It allows me to take on other app ideas as momentary “side quests”.

    While making Labyrinth, I showed a maze to Cong, who said “You should do a puzzle maker”. To which I said, “Nah.” And then a minute later… “Although, a daily maze game. Hmm.” It made sense that I could save time by taking CommonVerse’s daily random generation mechanic and combining it with Labyrinth’s logic to make a daily maze challenge. But would it even be fun to trace a 2D maze with your finger and try to solve it? No… so what if it was a 3D maze you had to escape?
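
    The daily-challenge mechanic is simple to sketch: hash the calendar date into a seed, and every player’s device generates the identical puzzle for that day. Something like this (illustrative, not CommonVerse’s actual code):

```python
import hashlib
from datetime import date, datetime, timezone

def daily_seed(d=None):
    """Derive a deterministic seed from today's date (UTC), so everyone
    gets the same maze on the same day without needing a server."""
    d = d or datetime.now(timezone.utc).date()
    digest = hashlib.sha256(d.isoformat().encode()).hexdigest()
    return int(digest[:8], 16)
```

    Feed that seed into the maze generator and the “daily maze” falls out for free.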

    The first prototype took a couple of hours, and I’ve been polishing it for the last few days. I think it’s coming along nicely. I’ll put it out soon, once I balance the difficulty and get more feedback from testing.

    The development of a maze, a maze, a maze… was hampered by a rare bar crawl with Howard and Jussi on Thursday night that gave me a massive hangover lasting into Friday afternoon. When I got home, I was too plastered to care that my vinyl copy of J Dilla’s Donuts had arrived from Amazon US protected by nothing more than a flimsy paper envelope. By the clear light of day I was amazed that they would even do such a thing. The discs are intact, but the sleeve has a bent corner. If I’d ordered from Amazon Japan, I would bet a major internal organ that it would come wrapped in four layers of stiff cardboard, bubble wrap, and a handwritten apology for their carelessness.

    Did I mention we’re going to Japan again? It’ll be a short vacation, in a couple of weeks’ time. Not much on the agenda, just checking in on the state of curry rice and egg sandwiches. Maybe see some nice art. Take some photos.

    Which brings me to the latest betas of Halide MkIII, which I’m very much looking forward to using on the trip. They’ve been progressing the app nicely, and it might be enabling the Holy Grail of iPhone photography workflows for me. Ironically it involves using Halide not as a camera app, but just as a photo editor. You can shoot compact (lossy, JPEG-XL compressed) ProRAW photos up to 48mp with the default camera app, then edit them in Halide to have the same look as their Process Zero photos! What this means: you get all the benefits of computational photography at time of capture, including noise reduction and night mode, but you’re also free to dial it back and get natural, “real camera” photos in post if the scene calls for it.

    As much as I like these side quests, I think making my own photo editor would be biting off far more than I can chew, so I’m still rooting for these guys to crack it.

    While writing this post, I got the news that an elderly aunt passed away at the age of 93. She had been in reduced health since the Covid years, but by all accounts she went very peacefully and I guess you can’t ask for much more than that after a long life. The extended family’s Chinese New Year routines fell apart in recent years after she pulled back from organizing them, so it was fitting that some of us got to reconnect at her wake on Sunday evening.

    See you next week.

  • Week 14.26

    An update on my app addiction

    On Wednesday morning I woke up and saw that my last app DataDeck was getting a bunch of likes and reposts on Bluesky, which was a nice surprise. If ever there was a place where people would appreciate a wacky, nerdy idea, I guess that would be it.

    My Instagram Story on Wednesday

    I made a couple of post-release updates to my magnetic poetry non-game, CommonVerse. There are now two new themes, one called Label Maker that resembles those little Dymo stickers we used to make, and another called Zine which is like a random note of cutout words. The UX has also been improved in subtle ways that should make composing sentences easier.

    My “main” app project now is one that I can keep noodling on in the background, with no real endpoint — it’s done when I think it’s done — and the idea was that it would help me slow down and spend less time with this vibe coding stuff. Guess what happened? That’s right, if you design something that can sit on the back burner, it will sit on the back burner. I started work on another app instead.

    Defying time and gravity

    I’ve known that the next step was to play with agentic coding tools like Codex or Google’s Antigravity. These are code editors with integrated AI that can look across all your project files and manage multiple agents working on simultaneous tasks. It’s a far cry from the way I’d been working: getting advice and instructions from a single chat, and then doing everything myself in a code editor. So I finally got started with Antigravity, and it blew my mind.

    The productivity increase is hard to describe. I could just describe stuff and it would get done without further work on my part. The tool can use the system’s terminal and Chrome browser to install packages, click around and test the app, figure out why things aren’t working, and fix it while you watch. Stuff that took me days over the last month could have been done in hours. It was automating so much of what little I, the non-programming human, was doing and considered my job, that it made me feel kinda redundant, to say nothing of real programmers.

    With Antigravity, the MVP of my app concept was done in three hours on a Friday. The good/bad news was that it blew through most of my token allocation for the week. So I went back to the “old” way of working and made subsequent changes manually. What I discovered was that I much prefer getting hands on with the project files, looking through the code to understand what was going on and what went where. I think I’ll use these agentic tools to get started fast and figure out a working architecture. After that, it’s more fun to get involved and make improvements slowly.

    Ate and left the Crumbs

    So the new app is called Crumbs, as in breadcrumbs, as in leaving a trail of them so you know where you’ve been. It’s a private location journal that lets you mark where you are on a map with a single button push. Over time, you can see the path of your journey(s).

    I made this because I’ve always wanted something like this for logging holidays, and no app really does what I want. Foursquare’s Swarm is based on Places, so you have to find the business listing or entry in order to check in. If you’re in the middle of a national park, or in a country where no one has created Places, or you can’t read the names, you’re out of luck. Google Maps has a Timeline, but it tracks your location all the time, and it only shows your trail on a day-by-day basis. Your data is also locked in their app and you can’t get it out to visualize in other ways.

    Crumbs is private, and you can take the data out in JSON format. It logs the time and weather along with your location, and you can write little notes. You can save an image of your map, or export a PDF of your journal.
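
    If you’re wondering what that exported data might look like, here’s a rough sketch of the idea (field names are illustrative, not Crumbs’ exact schema):

```python
import json
from datetime import datetime, timezone

def make_crumb(lat, lon, note="", weather=None):
    """One 'crumb': a timestamped location with an optional note and
    weather snapshot. Hypothetical field names for illustration only."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "lat": lat,
        "lon": lon,
        "note": note,
        "weather": weather,  # e.g. {"temp_c": 31, "condition": "Cloudy"}
    }

def export_journal(crumbs):
    """Serialize the whole trail to JSON for backup or for visualizing
    in other tools."""
    return json.dumps({"version": 1, "crumbs": crumbs}, indent=2)
```

    Keeping the export plain JSON is the point: the trail stays yours to re-plot however you like.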

    A big breakthrough (for me)

    Unfortunately, because it’s a web app and not a native iOS app, it can’t permanently store data on your device. The OS may decide to purge all your data if you haven’t used it in a week. That’s a dealbreaker for any app intended to be a life-logging tool. That really bummed me out, and I thought it would just have to be a personal tool that I couldn’t distribute to anyone else — since remembering to do manual backups/restores of the JSON file would be a massive PITA for any user.

    And then I had a Eureka moment! I thought of a possible solution and asked Gemini if it was feasible, to which it answered “Yes, this is an ideal solution”. I wanted to scream “Well, then why didn’t you suggest it all this time we’ve been discussing how to get around the problem!?”

    The answer was Dropbox integration. I can’t make a web app read/write files locally, but I can do it in the cloud. So now Crumbs is as useful as a “real app”, provided you connect a Dropbox account.
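
    For anyone curious how that works: Dropbox’s v2 HTTP API takes the file contents in the request body, while the target path travels in a JSON-encoded Dropbox-API-Arg header. A sketch of assembling the upload call (shown in Python for readability, though the app does this client-side; the token and path are placeholders):

```python
import json

UPLOAD_URL = "https://content.dropboxapi.com/2/files/upload"
DOWNLOAD_URL = "https://content.dropboxapi.com/2/files/download"

def build_upload_request(token, path, payload):
    """Assemble the pieces of a Dropbox v2 file upload: raw bytes in the
    HTTP body, the destination path in the Dropbox-API-Arg header."""
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/octet-stream",
        "Dropbox-API-Arg": json.dumps(
            {"path": path, "mode": "overwrite", "mute": True}
        ),
    }
    body = json.dumps(payload).encode()
    return UPLOAD_URL, headers, body
```

    A matching call to the /files/download endpoint (same header, empty body) reads the journal back, which is all the persistence a web app needs.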

    As of Monday morning this post is late and I think Crumbs is ready, so here it is.


    Other thoughts

    • Here’s a free idea: I was inspired by this stamp journal that went semi-viral, and wanted to make some sort of digital Instax photo album. It’d be kinda nice to keep a virtual scrapbook of interesting images, right? Well, turns out you can just use Apple’s Freeform app and Dazz Cam. It’s as simple as making a board and dropping in images, then arranging them however you want. All stored locally and synced to iCloud, easy peasy. Just because you can vibe code it doesn’t mean you should.
    • My iPhone’s MOFT Snap Case developed a cut/tear in its faux leather surface, and so had to be replaced after just six months. Its replacement is a Caudabe Sheath, which fits my requirements of being neither silicone nor slippery, with full edge coverage and a Camera Control passthrough button. It’s a hard plastic material with a rough, pebbled texture that makes it feel secure when held. It also came in second in MobileReviewsEh’s roundup of the year’s best cases. I got the version with the ‘open’ cutout for the 17 Pro Max’s camera island, not the ‘precise’ covered design.
    • Kim managed to finish reading Project Hail Mary and we went to see the film on Sunday (non-IMAX). Apparently there’s a longer cut, nearly four hours, which will be released on streaming in August when it comes to Amazon Prime Video. Yes, this is billed as an Amazon original film from the very first frame, coming even before the MGM logo (which they own), and I don’t think that will ever stop being weird. The film is good, a mostly faithful adaptation of a fun but slightly flawed book. I just think they glossed over a lot of detail in the final act, which lowered the stakes and made it less exciting and rewarding than it could have been. Hopefully the extended cut’s extra run time is concentrated at the end.
  • Collagen

    Use Collagen at usecollagen.netlify.app


    A simple tool for making collages, specifically with album cover art.

    Most collage tools are either bloated with unnecessary social features or too restrictive to be useful. Collagen is a single-purpose utility designed to solve a specific friction: the tedious process of manually sourcing high-resolution album art, aligning it in a grid, and then realizing you want to swap the top-left for the bottom-right. It turns a multi-step design chore into a fluid, drag-and-drop experiment.

    v2.0 screenshot
    Crop to fit a range of new aspect ratios

    Features

    • Integrated Sourcing: Queries the iTunes database for official, high-resolution artwork (600×600) so you don’t have to hunt for covers or deal with low-res thumbnails.
    • Tactile Reordering: Drag and drop tiles to swap positions instantly. The layout logic handles the movement so you can focus on the visual flow.
    • Flexible Dimensions: Define your grid up to 10×10. The preview and export scale dynamically to match your rows and columns.
    • Hybrid Content:
      • Search: Instant API pulls for mainstream releases.
      • Upload: Support for local files (obscure imports, demos, or personal photos).
      • Text Tiles: Add context or labels with custom text tiles. Features automatic contrast (white/black) and a choice between a clean sans-serif or a classic serif typeface.
    • Borders: Toggle between borderless, white, or black frames. The logic includes outer edge padding for a symmetrical, finished look.
    • PWA Architecture: Built to be “Added to Home Screen.” It caches assets locally on your iPhone for faster subsequent loads and works as a standalone app.
    • Export: One-click generation of a high-resolution stitched PNG. It uses a dedicated image-proxy pipeline to ensure every tile renders correctly without the “blank square” errors common in browser-based canvas exports.
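
    Under the hood, the sourcing trick is no secret: the iTunes Search API returns an artworkUrl100 thumbnail URL for each album, and Apple’s CDN will serve a larger rendition if you rewrite the size segment of that URL. A rough sketch of the idea (not Collagen’s actual code):

```python
from urllib.parse import urlencode

def search_url(term, limit=5):
    """Build an iTunes Search API query scoped to albums."""
    query = urlencode({"term": term, "entity": "album", "limit": limit})
    return "https://itunes.apple.com/search?" + query

def hi_res(artwork_url_100):
    """Upgrade the artworkUrl100 thumbnail returned by the API to a
    600x600 rendition by rewriting the size segment of the URL."""
    return artwork_url_100.replace("100x100", "600x600")
```

    Fetching search_url(...) returns JSON whose results each carry an artworkUrl100 field, ready to pass through hi_res.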

    Change log:

    – 14/04/26: Version 2.0


    Disclaimer: I made Collagen with the help of Google’s Gemini 3/3.1 Pro LLM and take no responsibility whatsoever for any damage you do with it.

    Related blog post: Week 9.26

  • Week 51.25

    I shot these photos on an iPhone 17 Pro Max and emulated three classic Chinese B&W film stocks with AgBr: Lucky SHD 100, Friendship 100 Pan Film, and Shanghai GP3 100. The idea was to get the look of road trip snapshots from the 1990s that a traveler then might have taken.

    11 greatly biased observations from a first trip to China

    • The Great Firewall does indeed block the majority of household internet names in the west. Imagine testing whether you’re online: what would you type into your browser’s address bar? Google? Nope. Any Facebook property? Nope. In fact, none of the major social networks or chat platforms work, with the exception of iMessage. However, this only applies to hotel WiFi networks and those provided by local ISPs. If you’re roaming on a cell network while using a foreign provider’s SIM, things work as expected (albeit routed through Chinese servers). I decided not to bother with VPNs and just trusted in HTTPS 😬
    • Powerbank rental machines are ubiquitous, even in places where you should never leave a box full of lithium-ion batteries, like out on the street in direct sunlight. You pay a few cents per hour (via QR code), and because they’ve landed on a common battery design between the many operating brands, it seems you can return one anywhere else after you’re done charging your devices. It’s great not having to carry your own around, but even given a high degree of civic integrity, I think getting adoption in a country where everyone already has their own (like Singapore today) would be tough.
  • Week 48.25

    My personal MUBI Shaolin film fest went on as planned, and I managed to watch a few more before they left the service. Gordon Liu had a role in just about all of them, which shows what a popular and influential figure he was in the industry. Who even comes close in Hollywood? Pedro Pascal??

    If I had to recommend one Shaw Brothers film, it would still be Dirty Ho (1979) which I’ve mentioned here before. It’s essentially the same winning template that Jackie Chan’s career was built on — lots of brilliant, intentional fighting moves masked as accidents and incompetence.

    The most uneven one I saw was Legendary Weapons of China (1982), which has about five different plot lines running through it, all to provide flimsy justification for the spectacular finale in which 18 (!) different Chinese weapons are brandished, and as many fighting styles showcased. It’s like Don Draper pitched that idea on a whiteboard and then they had to come up with another 70 minutes. There is an extended action sequence in a straw toilet hut floating over a river, where people literally end up in the muck. This absurd scene involves both kungfu and possession with voodoo dolls.

    In another realm of absurdism lives Dogtooth (2009), the debut film of Yorgos Lanthimos which made a splash at Cannes that year. I saw it on MUBI this week because I liked Bugonia (2025) and wanted to start at the source. Jesus, this film is an exercise in creating the wrongest setup and then having its characters do things that follow on logically but are nevertheless very wrong. You get the sense of perversity for the sake of it, or to give life to the director’s own kinks, sort of like Tarantino putting his foot fetish in everything — but still actually much worse.

    However, do something terrible with craft and conviction, and it will gain lasting historical value. That’s how this world works; I don’t make the rules! But what if you don’t actually make the thing and just have the idea, in the form of a prompt, let’s say?

    Images that never happened

    Google released their Nano Banana Pro image generation model recently, and I’m sure we’ve all seen examples online by now. Things have progressed to the point where I’m constantly questioning the veracity of things I see online, and I think at some point the mental filtering will become so tedious that we’ll simply stop wondering and accept things that are true and untrue equally. If the short-form video that ruins your brain’s ability to focus and feel joy on normal terms makes you laugh, who cares if those things really happened? And then it will extend into other parts of life, and then… who knows?

    I decided to see if Nano Banana could place me in ROSALÍA’s LUX album cover and, of course, the answer was yes. Too easily, in fact. I only supplied it with a single forward-facing photo of me at a dinner table, and it was able to extrapolate what I’d look like from a different angle. We are, ladies and gentlemen, so cooked.

    It was also Black Friday sales week, and I decided to give VSCO Pro’s annual subscription a try at 50% off (hard to justify at full price). In addition to their Pro set of filters, which are actually really good, it also comes with access to AI tools, of course. Their object removal is state-of-the-art, to the point that it can invent very believable portions of an image that you wouldn’t notice unless really scrutinizing the scene. After a few experiments, sculpting messy scenes in old photos into what I wished they actually looked like, I had to step back and ask myself what the hell I was doing. Apple’s refusal to let the iPhone create “images that never happened” is absolutely the right stance.

    What becomes of designers?

    AI’s obviously going to change the way we work, and I’ve been worrying for a while now about the future of the design profession. About the people who do this work, whether they will continue to be attracted to it, who will pay for their services, and what those services will actually look like. It’s been hard to imagine timelines that are positive by the standards I care about.

    As with many sectors that have experimented with AI tooling, I often hear that senior practitioners using generative AI models can get more done “on their own” — the quoted phrase implying 1) without the assistance of those pesky junior people, and 2) more cheaply for the business. But just because the tasks once performed by junior people can now be done by AI doesn’t mean juniors can’t find something else to do, or don’t need to be trained anymore. Nevertheless, some business leaders are acting as if that were true.

    A friend told me how it’s now possible to run a small agency powered by seniors + AI only, without any junior hires. They were surprised that I pushed back — but the idea sounded irresponsible to me. It’s one thing if you can’t find employment and have to embrace AI to put food on the table. It’s another to be in a position of strength late in your career and choose this. If you can’t afford to leave the ladder down behind you, I said, it would be better not to do it at all.

    But because bean counters can always be counted on for short-term thinking and a reluctance to spend on design, some companies will go further and not hire AI-augmented senior people at all. They’ll either use inexperienced juniors or ask someone like a product manager to handle “design stuff” on the side using AI. Depending on how much the tools improve, the visible outcomes of this may seem acceptable for quite a while! Design won’t go away as a function, it’ll merely be handled by a different group of people.

    My main concern has been that doing a good-enough job in this way will scale so well, and become the dominant approach so quickly, that we’ll lose the diversity and depth of craft that comes from having human practitioners out in the real world, doing things like interviewing users to understand outlier behaviors, reading contextual cues and hearing what they don’t say as much as what they do. Then using these unique stories to make the larger design solution more resilient. It’s a job that humans are well equipped to do. A business that relies on AI to create an average of best practices may happily miss all of it.

    Why do I think this matters? Because while a bunch of LLMs trained on world knowledge (including artifacts from past design exercises) will generate pretty good insights and workable interfaces from a wide field of generic possibilities, it’s still a path to a monoculture of experience. And if we break the chain of passing down the skills to do the work, then some future post-AI generation will have to learn them all over again.

    I wondered if there might be a market for artisanal human-led design work. After all, centering the role of human craft has kept the luxury goods market alive in the face of mass manufacturing. But that would mean it becomes something performative, and necessarily restricted to higher paying customers. I actually believe that AI augmentation can produce better work; I just don’t trust our economic systems to nurture it over cheaper work.

    Teach an LLM to fish…

    Then this week, I saw something on TV that seemed like an apt analogy and put me into a more zen state of acceptance. It was an episode of Japanology Plus on NHK, with long-suffering host Peter Barakan forced to go out on a small fishing boat in challenging waters. I was honestly surprised the producers/insurers allowed a man of his age to do it.

    Anyhow, as they were heading back from being thrown around by the waves, he asked the captain how fishermen in the old days would have survived that ordeal without GPS, walkie-talkies, and engines. The captain’s reply was that it was more dangerous back then, and they had to use their experience and intuition, navigating by looking at the mountains and stars, reading the winds and currents. You can imagine many lives were lost on the job.

    Would any of those old fishermen trade places with their descendants today, giving up those seafaring skills for the ability to catch many times more fish and live twice as long, comfortably? Very likely! Modern fishermen are still out there on the ocean but their technology distances them from intuiting the waters in the same way. We also know now that the scale at which they fish those waters is unsustainable.

    Likewise, there will be more designers in the future, less skilled by today’s standards but able to oversee projects too complex for us to fathom. Maybe with worse overall outcomes for the world than if we’d never opened the mystery box of AI. But I realize now that this pattern of losing one thing to gain “something more, but worse” is simply an inevitable law of the universe. Two steps forward, one step back.

  • Week 38.25

    I type this while listening to Sam Fender’s last album, People Watching. I’ve been meaning to hear this through for a while, but it got buried in my ever-growing library of new music. Thankfully, with the latest update to Apple Music in the OS26 series, you can now pin up to six albums or playlists to the top of your screen. I’ve wanted this sort of ‘Now Playing’ or ‘Heavy Rotation’ virtual shelf for the longest time — it’s the first feature I’d add if designing a music player app. So this album and five other neglected ones are now sitting up there, and I can give them the attention I want.

    I’ve been listening on both my new AirPods Pro 3 and an original pair of AirPods Pro, and dare I say the difference is quite obvious. Louanne asked me what I do with old pairs of headphones when I get new ones, and the answer was “put them in different rooms!”, of course. I’m fast running out of rooms. The new model sounds much more Beats-like than ever (modern Beats, not OG Monster Beats). That is to say, a bass-forward sound with a very clear, almost sparkling high end. It’s a fun sound, and I think they’ll be very popular for all kinds of music, if not audiophile-grade neutral. They appear to fit better than before too, and the difference in body shape will strike longtime AirPods users as soon as they pick them up.

    Then my new iPhone arrived, and before you judge, the old one is being returned to Apple’s Trade In partner in a few days, where it will hopefully be responsibly refurbished or at worst recycled. They’ve suggested that I’m likely to get nearly half the original cost back, which is an astounding deal for a two-year-old model! I’ll believe it when the deposit lands in my bank account.

    I’m very happy I decided to stick with the Pro Max size instead of switching to a Pro. The slight increases in height and width are visible if you put them together, but aren’t really noticeable in the hand. The increase in thickness IS, but combined with the new gentler corners on the seamless aluminum body, I think thicker is actually better? This might be the best feeling iPhone ever.

    I’ve yet to put the new camera system through its paces, but I’m excited and very pleased after a couple of days with it. Images look cleaner, and the redesigned front-facing camera is a revelation. I took a test selfie and could scarcely believe how presentable I looked. Coming from the iPhone 15 series, I’m also new to the Photographic Styles that were introduced last year, and am getting a lot out of them. I compared photos shot in RAW with Halide and in HEIC with the default camera using a tweaked “Natural” style, and they’re extremely close in both SDR and HDR. This is a big deal! Along with the revised Photonic Engine this year, the dark days of overprocessed iPhone photos may be behind us.

    When reviews get creative

    One thing I’ve noticed this year is how bland and predictable the video reviews from the usual tech YouTubers and influencers have been. They go through the spec sheets while speaking to the camera, do a few test shots, and end without any thoughts you couldn’t have pulled out of ChatGPT. But then I saw a couple of videos from the Chinese-speaking side of the internet, and that’s when I realized Western civilization is well and truly finished.

    Take a look at these and tell me you’re not duly impressed by the storytelling creativity, production skill, points of view, and passion on display — even if you can’t understand a word (but most of them have English subtitles you can enable). They could just shoot the phones on a stand while swinging a light overhead, but instead they go hard with CGI, costumes, sets, comedic sketches, and cinematic editing. And they do these in the WEEK they’re given between the phones being revealed and launching.


    We visited a local art sales event for works based on the Ghost in the Shell: Stand Alone Complex franchise, just to have a look. The metal-printed pieces were going for upwards of S$4,000, so there was never any chance we’d buy one — but I got a little acrylic plate standee for my desk (S$25). Above are some snaps straight from the iPhone 17 Pro Max, using only Photographic Styles.

    Afterwards, we visited the SG60 Heart&Soul Experience which is being housed at the site of the old library@orchard. Supposedly it will be renovated and return as a downtown library next year, which is great news. From what I can gather, it’s meant to inspire people about what Singapore’s future might look like, and what place they’d have in it (employing lots of tech to personalize the journey). Criticisms I’ve heard are that it doesn’t go far enough, and the future shown looks kinda like the present: delivery drones, working in VR headsets, greenery everywhere. Visit and see for yourself. Bookings are required, but the tickets are free. It’s quite an involved production with each visitor being given a guide device (an encased Xiaomi smartphone) to wear around their necks, and human facilitators bringing them through the stations.

    Oh, speaking of cases, we went by Apple Orchard Road after the show to have a look at the iPhone Air, and I haven’t seen that store so packed in years. I picked up a rather loud Beats case in “Pebble Pink”, mostly because I really wanted a Beats case last year but they only made them for the iPhone 16 series. It’s hard plastic with a matte finish that’s slippery when your hands are dry but tacky enough if there’s a bit of moisture.

    Check out my reel with the Pink Panther theme:

    And while we’re on the subject of great directors, I finally sat down with Bernardo Bertolucci’s The Dreamers (2003) on MUBI. More explicit than I expected, it’s easy to see why people call it pretentious with its heavy callbacks to classic cinema, but it’s never boring and it sure knows how to use mirrors. I gave it 3.5 stars on Letterboxd, mostly because there’s “altogether too much time spent lying on floors for my liking”. There’s also one truly revolting moment where, out of money, they raid the apartment building’s trash for scraps of food and assemble the world’s grossest bento.

    Spike Lee’s Highest 2 Lowest (2025), now on Apple TV+, is a remake of Kurosawa’s High and Low (1963), which I’ll embarrassingly only get around to watching after this homage. But this is a fine film that stands on its own: a sharp, sometimes experimental exploration of class and morality, constantly playing on the gulf between generations — Motown vs. modern rap, film vs. digital, Kurosawa vs. Lee.

  • Week 21.25

    Week 21.25

    • This week’s installment is update #256! That’s a big deal for fans of computationally significant numbers.
    • Oddly, just as Michael blogged that his family was down with gastroenteritis last week, a similar bug hit our household. Kim got a bad case of what we thought was food poisoning, and then, of course, I came down with it two days later. It’s a weird one: the stomach trouble comes with headaches and fatigue. Fearing that it was contagious, plus being all tired out, I had to skip one meetup and a wedding dinner over the weekend.
    • So not that much happened with me this week, but I suppose we can talk about the mess out there?
    • Google held their annual I/O event and showed off their latest AI achievements. TL;DR: some of this stuff is just gross. From a technical standpoint, yes, it’s remarkable that pretty realistic video (with sound) can be generated from text, and Google can now use your personal context from documents and emails to help with tasks — similar to what Apple promised (but has yet to deliver), with the important distinction that one centers privacy and on-device computation while the other will do it on their servers. I don’t know if I trust Google to let an AI crawl all my documents, and for that reason I minimized my exposure to Google years ago.
    • I remember when Gmail first came out and pioneered showing contextual ads next to your emails. There was an uproar, and the company had to calm people by saying ‘no one is reading your emails, it’s just automated keyword matching’. Well, with LLMs, it’s much, much worse. No human would be able to go through all your messages, photos, location data, and search history to piece together an invasive psychological profile about your vulnerabilities, and make it actionable for advertisers. Trust that an AI will. Just look at how a version of Claude 4 Opus in testing tried to blackmail an Anthropic engineer over an affair it believed they were having.
    • Beyond the business model of Google’s AI products, it’s their designed intent that feels particularly bleak. One example they proudly demoed: a friend emails asking for holiday recommendations at a place you’ve visited. Instead of writing them a thoughtful reply, you let Google AI scour your photos, emails, documents, and receipts to auto-generate a message, using words chosen to sound like you. Sundar Pichai even had the nerve to say, “With personal smart replies, I can be a better friend”. With friends like these, don’t even bother writing. Just ask Google directly and it can snoop the inboxes of a billion other customers to give you the statistically “best” itinerary.
    • And this is where a company’s lack of imagination and care makes itself plainly apparent. Instead of designing an AI system that writes replies in your place, they could have made one that recaps your holiday with a little presentation showing you where you went, what you did, and what you enjoyed most. Then, memory suitably refreshed, you could sit down and write your friend a reply that shows you actually give a shit about them, and both of your lives would be richer for it. I’m beginning to think that Apple, by failing to ship their AI features on time, might be saving us from a future I don’t want to live in. Maybe they’ll never ship. Maybe that was the plan all along.
    • Then a man who many would expect to know better — who sits on stages and professes the importance of values in technology, and is arguably the most famous designer of this century — announced a new venture (also called io) with Sam Altman and OpenAI. And the vibes, my friends, were off. The 9-minute launch video came across as a thin PR exercise to polish Altman’s spotty public image and reassure OpenAI’s investors. It struck an uncharacteristically self-congratulatory tone for the usually humble Ive while announcing, essentially, nothing. Sterling Crispin tweeted a biting Marxist read of the video in the style of Slavoj Zizek. Many saw it as Altman angling for a Steve Jobs comparison, with critics pointing out that he’s not enough of a product person to be a true partner (and necessary editor) to Ive. In any case, I have no doubt that the io team will deliver some beautifully designed hardware. But I fear even they can’t summon enough thoughtfulness or optimism to divert AI from its current trajectory, or prevent the cultural and societal wreckage it’s likely to create.
    • Speaking of nice devices, one of my fondest gaming memories involves my last year at university, when I switched from my PC to a Mac that was relatively useless for playing games. I couldn’t imagine not having any games (this was before smartphones, mind you), so I bought myself a Game Boy Advance SP — the pinnacle of the series, in my opinion. It’s hard to imagine a more perfect form factor for a handheld console. The clamshell design protects the screen; the vertical layout keeps your arms and hands close together, even when squeezed into an economy flight seat; the screen is sharp and self-lit, which was not a given in those days. I would lie in bed in the dark on cold nights and play Final Fantasy I until I was bored to sleep. To this day, I still fall asleep during boring turn-based battles — one of the reasons I want to try Clair Obscur: Expedition 33, since it blends turns with real-time actions. Pretty sure I traded in my GBA SP for store credit to buy the Nintendo DS (DS Phat) when it came out later that year, and I regret it.
    • Anbernic, the China-based maker of retro emulation handhelds, released a clone of the GBA SP last year, the wonderfully named RG35XXSP. It runs a Linux-based operating system and may or may not come preloaded with thousands of classic gaming ROMs from the NES to the PS1 — you should only play the ones you actually own, of course. I didn’t get one because Anbernic doesn’t have the best rep when it comes to build quality, and I wasn’t sure they’d get it right the first time. Fast forward to today, and the new RG34XXSP (yes, it went back a model number) looks to be the one to get, even over the competing Miyoo Flip V2. They made it slightly smaller, dampened the button clicks, added two analog thumbsticks for wider compatibility, and came up with some better colorways. DYOR (do your own research) though! Now that it finally sounds more like a product than a prototype, mine’s on its way in the mail, and I can’t wait to relive some old games while waiting for the Switch 2 to arrive.
    • Meanwhile in Japan, Fujifilm decided to get in on the nostalgia cash grab game with a new camera release, the long-teased “X half”. Many expected this to be based on the X100 or GFX series, that is to say, employing an APS-C or medium format sensor. But no, it’s a 1-inch sensor, vertically oriented to resemble 35mm half-frame photos. A major selling point is the ability to shoot “in-camera” diptychs (a two-photo collage). The overall camera concept is fantastic: it has a 32mm lens, is very small, and features a “film camera mode” that is basically a physical version of those iPhone apps that try to simulate the fun limitations of analog photography. When activated, you are locked into a chosen film simulation look until you’ve used up a virtual roll of film, ranging from 36 to 72 half-frame shots. You can’t see any photos until you finish, and you can’t even use the digital screen to compose shots, only the optical viewfinder.
    • I would be down for this, except that early reviews show many corners were cut between concept on paper and the final product. Firstly, it’s made of plastic painted to look like metal. There are no words for how much I hate this when done badly. There’s also a visible seam on the front face that ruins the look. Then the processor seems to be slow, and it ruins the illusion of the film advance lever, which you need to crank before taking the next shot; it’s reportedly unresponsive until the last photo has saved. If you’re going to fake analog mechanisms, they have to be perfect! The flash unit, a big part of the analog film camera look, is a weak LED rather than a xenon bulb. Then there are the cheesy overlay effects, like light leaks and “expired film” color casts, which seem borrowed from the company’s Instax evo cameras rather than its premium X series. The camera costs S$999, which many are calling too high, but honestly if they had to charge S$200 more to actually do the concept justice, I’d be on board. If some random startup made this for half the price, the flaws would be forgivable. But this is Fujifilm, and if you’re going to carry this faux film camera around and look like an old douchebag with more money than sense, it had better be good.
    • Until they do a better job, I’ll get by with the Diptic app I bought in 2010, which makes similar collages with just my iPhone (see featured image above) and shoots vertical orientation photos by default!
  • Week 17.25

    Week 17.25

    • When I discovered a fix for an annoying iOS photo date/time saving bug last month, it required a reset of all my phone’s settings. Which means that many of my apps still can’t send notifications or use my location — these permissions are being restored ad hoc, since each app can only ask for them again when I next open it.
    • As a consequence of this, it wasn’t until recently that I suddenly noticed I wasn’t getting the twice-daily ‘State of Mind’ check-in reminders from Apple Health anymore, and went to turn them back on. These are quite useful for looking back and seeing how happy/depressed I was at any point in time, and it sucks that I now have a big hole in this dataset.
    • I’m taking this opportunity to change the way I approach this exercise: literally being more positive. For those unfamiliar with it, you’re meant to rate how you’re feeling on a scale from Very Unpleasant, through Slightly Unpleasant and Neutral, and so on. I never used to go all the way up to “Very Pleasant”. Like, I could win a million dollars and wonder if even that warranted using such strong language. But now I’m giving myself permission to be more generous with my feelings. I can feel “Very Pleasant” more often and nothing will get broken.
    • In other recalibration news, I spent half a day in Numbers (Apple’s spreadsheet software) and did a personal annual report of sorts to inspect how I’ve been managing my money in the last year. Now that I have enough data, I was able to build some graphs and breakdowns of what a realistic budget looks like. I’ve always recorded my expenses on a daily basis with an app, but never crunched the numbers before; I was happy just knowing that I could. Naturally, now that I have, I wish I’d done it years ago.
    • At several points during the above activity, I wanted to upload my file into ChatGPT and have it analyze my spending patterns and offer up some money-saving strategies for me to consider. But of course, giving OpenAI that data would be a terrible idea. I wondered if Apple Intelligence in Numbers could do anything with it, but nope. It’s just the same old Writing Tools that make more sense in a word processor document than a spreadsheet.
    • I spent most of my time reading this week, although the temptation to jump on the Clair Obscur: Expedition 33 bandwagon is very strong — at this point in time, it’s the highest-rated game of 2025 and the 13th best game of all time on the PS5. Maybe next week?
    • On top of finishing Broken Money as scheduled, I read Meditations by Marcus Aurelius and rated it just three stars on Goodreads. It’s absurd that one can even humble the great emperor with a thumbs down on the internet, but his journals are in dire need of an editor! Yes, I know these were never meant to be published, but he repeats the same handful of principles over and over (which I largely agree with), and the whole thing could have been cut down to a podcast episode or a self-help PDF on Etsy.
    • I also read the second book in the Murderbot series, Artificial Condition, and found it even more fun than the first. At this point, I’m kinda desperate to watch the Apple TV+ show and not sure I can wait for weekly drops over the next few months.
    • While looking for a manga with some colored pages to try out on my Kobo Clara Color, I started reading the oddly titled I Want to Eat Your Pancreas (trust me, there’s a mostly acceptable explanation for this), and enjoyed it enough to finish everything in a day. Unfortunately, while experiencing color on an ereader is real nice, the Clara’s screen is too small and I read most of it in black & white on my old Kobo Libra. Irony!
    • I’m now close to finishing Erik Olin Wright’s How to Be an Anticapitalist in the Twenty-First Century, not because I dare dream that this world could ever abandon capitalism, but because it wouldn’t hurt to have some alternatives. I’ve now begun to think of capitalism and its systems as a spectrum rather than as absolutes.
    • Coincidentally, it’s election season here in Singapore and we go to the polls next Saturday. I’ve been watching the political rallies live-streamed on YouTube over the last few evenings, and have mostly been annoyed at the complaints and vague promises to do better (to say nothing of the insanity that sometimes shows through in racist, antivax, and xenophobic ad libs). Singapore already enjoys some of the best quality-of-life outcomes possible under a hybrid capitalist/social democracy, but it seems people want more. They want a hands-off government, but also want it to protect them from job-stealing AIs and foreigners. They want everyone to be paid more, but don’t want to spend more on services in return. It’s so exhausting.
    • I was looking to spend some money on a birthday present for myself since I’ve been such a good little budgeter, but the best things I could find were these new Yashica Peanuts cameras featuring Snoopy, who I’m still kinda obsessed with. The idea was quickly abandoned because I do not need another digital camera, especially not an intentionally mediocre one.
    • Still in the mood for a photo-related splurge, I went back to the Lampa camera app (first mentioned last September), which got three new ‘film-inspired’ color profiles in a major update this week. That brings the number of available looks up to six, and better justifies the high asking price (S$40/yr or S$90/forever), which was previously out of the question. I’ve been using the free trial while looking for cheaper alternatives, but Lampa just does what it does so well.
    • In terms of UX design, it’s super focused and perfectly walks the line between too simple (Zerocam) and too complicated (almost every competitor I’ve looked at). There’s just enough control, and few enough options that you can actually make decisions. In that way, it out-Leicas the official Leica app, which does not have a great UI and asks for S$100/yr. Technically, it uses a Bayer RAW image pipeline for more natural captures, keeps those RAW files so you can “redevelop” photos if you didn’t get the right filter or exposure the first time, AND deletes those RAWs automatically after 30 days to save space. Hats off to great work, but man, the cost is uncomfortably close to buying an actual Snoopy camera.