Tag: Generative AI

  • Week 16.26

    Week 16.26

    We attended my aunt’s funeral on Tuesday. My complaints about the Mandai Crematorium mostly still stand, but they’ve at least moved the ugly signs printed on office paper away from the viewing windows so you can see the casket on its way to the… furnace?

    As I said last week, she was 93 and the family was mostly prepared for this. But there were tears, and some meaningful words were said, and despite my irritation with the undignified air of the Crematorium’s processes, I was struck, at a mostly subconscious level, by a sense of loss. Because a couple of days later I was thinking about orchids.

    Since I was a child, I’ve known orchids to be a part of my family’s story. My paternal grandparents were enthusiastic orchid breeders as well as co-founders of the Mandai Orchid Garden, where they helped raise the profile of Singapore’s orchids at home and abroad. I was surprised to learn while writing this that orchids are still an instrument of Singaporean diplomacy. Although I never had any interest in them myself, my late grandmother is defined in my memory by her fondness of them, and several other relatives (including the aunt who just passed) had hybrids named after them, created by my grandfather.

    As mentioned last week, I have been experimenting with generative art and it entered my mind that I could try to simulate orchids — creating infinitely unique flowers and plants in code. Now, this is nothing new. Humans have been trying to reproduce natural processes like botany with algorithms for almost as long as we’ve had computers. But the more I thought about bringing millions of digital orchids to life, the more I thought about where they would go after. To create a beginning is to guarantee an end. The result is a digital artwork I’ve called Orchids, Once. It’s a sort of meditation on impermanence.
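
    If you’re wondering how “infinitely unique” works in practice: everything grows from one random seed. Here’s a toy sketch in Python, with made-up trait names (not the artwork’s actual code, which runs in the browser):

```python
import random

def grow_specimen(seed=None):
    """Toy sketch: one random seed deterministically expands into every
    visible trait, the way a genome expands into an organism. The trait
    names and ranges here are hypothetical."""
    rng = random.Random(seed)
    return {
        "petals":     rng.choice([3, 5, 6]),    # hypothetical trait ranges
        "hue":        rng.uniform(0.0, 360.0),  # base petal color, degrees
        "stem_nodes": rng.randint(4, 9),
        "droop":      rng.uniform(0.0, 0.4),    # how far the blooms bend
    }
```

    Reuse a seed and the exact same flower would come back, which is precisely what Orchids, Once. refuses to let you do.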

    You can summon a new orchid into existence, but know that you’ll be the only one who ever sees it. When you leave or reload the page, it’ll be gone. Does the fact that there are potentially billions more make it less special? Or that it cost nothing? Or that it’s not technically “alive”? In any case, I hope people will cherish the brief amount of time they spend with each flower. I didn’t design a “retry” or “new orchid” button because the responsibility of ending a session should rest with the viewer.

    Orchids, Once. also stems from the generative music experience I gained while making DataDeck, and features an ambient soundtrack that’s created in real time as the orchids turn and sway in the digital wind, as unique and unrepeatable as the flowers themselves.

    I had to work with both Gemini and Claude to get this thing in shape. I didn’t save enough screenshots of the development process, but here are two from the prototyping phase that AI would have you believe were good enough to ship, and that look like orchids.

    Many hours of refinement later, I had models that could pass for plants, but they had a nasty habit of growing backwards into themselves, or occasionally mutating into unholy jagged messes. I thought I was finally getting somewhere, but then we took a trip to a plant nursery nearby for a little field research. I spent some time looking at dozens of real orchids and taking pictures, and came home with lots of changes to make. I have learnt more about orchid anatomy this week than I did in decades of being in an orchid-breeding family.

    I also can’t help but reflect on the past few weeks of making things in code with AI — this only started on March 1, but it feels like months ago. Orchids, Once. is my 10th “app” (but the 9th released).

    The first few toyed with pulling data from online sources: Collagen pulled album art from iTunes, Urban Jungles pulled weather data from Open-Meteo, SkySpotter pulled air traffic data from OpenSky.

    Then the next few pulled data from online sources and tried to make something new out of them: Library Supercollider mashed up texts from Project Gutenberg, CommonVerse let you play with words from a dictionary, DataDeck generated music from public Singapore data feeds, and Crumbs let you build your own “maps” with location data.

    The most recent ones? They’ve been about generating their own assets out of nothing, without drawing on external data: the GenArt wallpaper/image maker I’m still working on, daily 3D mazes to escape from, and these orchids. These shifts weren’t conscious or planned, but it’s curious to look back and notice it.

    I’ll stop at 10 for a while, and maybe pick things up again after I get back from my holiday.


    One bit of housekeeping: I found the time to revisit my first app, Collagen, and make some improvements I’ve been wanting to see for a while. You can now use images in different aspect ratios, not just squares. And each image can be zoomed and cropped really easily with a new editing overlay. You no longer lose images if you change the grid size, text cells can be edited, and the UI has been given a mild glow up. I feel like I’ve learnt a lot since then, and this v2.0 brings things up to date.


    Media activity

    My book club finally finished reading Michael Crichton’s Sphere and I gave it three stars on Goodreads. In the end, my vague recollections from reading it as a teenager mostly held, although a slightly racist and sexist worldview permeates the text, and I’m sensitive to how much that would not fly today. I’m eager to see how the film adaptation handles that when we watch it together next week, as it was made a decade later.

    The second season of The Pitt ended after 15 episodes and damn I’m going to miss it. This is a show that alerts me to how ignorant I am of certain (most?) social dynamics and other signs people tend to give off.

    I’m speaking of the series in general here, so I hope this doesn’t spoil anything for anyone: suicidal ideation is a recurring theme that I didn’t take very seriously — which is the whole point of the show’s handling of it.

    I go on Threads after every week’s episode to read people’s takes and interpretations, and I’m always learning something. This week some people got mad that men don’t take this suicide stuff seriously, or can’t see it at all and can’t talk to their friends, and I guess I’m a little guilty of that. I didn’t know the character on the show was thaaaat serious, and thought “eh, they’ll walk it off. It’s no big deal, everyone imagines it sometimes.” Apparently not.

    Unintentional death theme continuing: I watched a Japanese film on MUBI: Super Happy Forever (2024). It’s about a widower who goes back to the seaside town where he and his wife met on holiday. It jumps back and forth in time and does a few other things that should yield more emotional impact than it does. I wrote on Letterboxd: I think the ingredients of a proper 4-star movie, the kind you rewatch every five years, are here but not properly assembled. Nairu Yamamoto is so lovely, so magnetic in all of her scenes that she redeems her supremely annoying partner like the best of people do. Shame.

  • Orchids, Once.

    Orchids, Once.

    View the digital artwork at https://orchidsonce.xyz


    Almost every orchid you’ve ever seen was intentionally bred — a slow accumulation of crossings, selections, and genetic accidents that produced something new. This is the same process, compressed into a digital instant. Every visit generates a unique specimen: structure, colors, and proportions assembled from code the way a real orchid is assembled from DNA. No two will ever be alike.

    As it turns in the light, you’ll hear music shaped by the flower’s appearance — the soundtrack itself is a one-time miracle, as unique as the visuals on your screen. Its presence completes the meditation.

    When you close the window, the orchid dies. There is no save state, no gallery, no record of what you saw. Each plant lives only as long as you stay. If you weren’t there, it wouldn’t exist at all.

    There is always another one waiting to grow — but not that one. Never again that one.


    Disclaimer: I made Orchids, Once. with the help of Gemini and Claude LLMs, and take no responsibility for any allergies or other harms.

    Related blog post: Week 16.26

  • a maze, a maze, a maze…

    a maze, a maze, a maze…


    Play a maze, a maze, a maze… at amaze3.app


    Every day, a new maze appears. Everyone in the world gets the same one.

    There’s something cozy and comforting about knowing that right now, somewhere, another person is navigating the same corridors, hitting the same dead ends, and having the same moment of doubt about whether they just walked in a complete circle. Some days the maze is generous and you are out in twenty seconds. Other days it will make you work for it, and you will feel the exit before you see it.
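
    Incidentally, the “everyone gets the same maze” trick needs no server: if the generator is seeded from the calendar date, every device in the world derives the identical puzzle independently. A sketch of the idea (the app’s actual scheme may differ):

```python
import random
from datetime import date, datetime, timezone

def daily_seed(day=None):
    """One shared seed per UTC calendar day, so every player's
    locally-generated maze comes out identical."""
    day = day or datetime.now(timezone.utc).date()
    return int(day.strftime("%Y%m%d"))

# Two players, two devices, one maze:
rng_a = random.Random(daily_seed(date(2026, 4, 14)))
rng_b = random.Random(daily_seed(date(2026, 4, 14)))
assert rng_a.random() == rng_b.random()
```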

    Each maze has a target time based on the shortest possible path. Finish close to it and you’ll earn an S-rank celebration and a shareable stats message. Go slower and you’ll land somewhere between a laudable A and a sad D — either way, there is always the group chat to prove you showed up and tried.

    Three modes: Standard comes with breadcrumbs showing where you have been; Hard Mode removes them and trusts you to hold the map entirely in your head; Chill Mode turns the timer off for people who just want to wander. Themes range from an outdoor garden maze to a retro game dungeon, so you can get lost in a way that feels right for you.

    A new one tomorrow. And the day after. A maze, a maze, a maze.


    Disclaimer: I made a maze, a maze, a maze… with the help of Google’s Gemini 3 Pro LLM. No responsibility taken for wrong turns or damaged self-esteem.

    Related blog post: Week 15.26

  • Week 15.26

    Week 15.26

    I’m looking through my camera roll to remember what happened this week and it’s mostly a bunch of “artworks” I’ve been making. Wait, let me step back: I’ve had an interest in procedurally generated graphics (GenArt) for a while, and it peaked with the NFT boom of 2021–22, when I spent a relatively obscene amount of money minting and collecting artworks I really liked (not the monkeys). I’m mostly drawn to the idea of mathematically rigid routines producing organic beauty — the contrasts in that, and the unpredictability of what you get when you roll the RNG dice.

    So after my recent experiments in making apps, I wondered if I could get AI to write me code that would generate images based on concepts I described. The answer is, of course, yes! It’s important to note this isn’t prompting for images (like when you use Midjourney or DALL-E), it’s prompting for the math behind making images. And once you’ve created the rules by which it draws different art styles, you can create a nearly infinite number of unique artworks by dialing different variables up and down.

    One example is a “style” I made called Labyrinth, which produces actual, solvable mazes. Depending on the variables you adjust, you can make mazes ranging from tiny to massive, with just one solution, or many. If you asked an image generation AI to draw a maze, it would likely lack the coherence of a real maze, because of the way it operates — focusing on the superficial appearance and not the integrity of its paths. But an AI model can write the math that draws a maze.
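
    For the curious, the standard way code guarantees solvability is to carve passages with a depth-first backtracker, which yields a “perfect” maze: exactly one route between any two cells. A toy sketch in Python (illustrative only, not Labyrinth’s actual code):

```python
import random

def generate_maze(width, height, seed=None):
    """Carve a 'perfect' maze (exactly one path between any two cells)
    with an iterative depth-first backtracker."""
    rng = random.Random(seed)  # same seed -> same maze, every time
    passages = {(x, y): set() for x in range(width) for y in range(height)}
    stack, visited = [(0, 0)], {(0, 0)}
    while stack:
        x, y = stack[-1]
        # unvisited orthogonal neighbors of the current cell
        nexts = [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if (x + dx, y + dy) in passages
                 and (x + dx, y + dy) not in visited]
        if nexts:
            cell = rng.choice(nexts)
            passages[(x, y)].add(cell)  # knock down the wall both ways
            passages[cell].add((x, y))
            visited.add(cell)
            stack.append(cell)
        else:
            stack.pop()                 # dead end: backtrack
    return passages
```

    Knocking down a few extra walls afterwards is how you get mazes with more than one solution.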

    I start most of these by thinking up an artistic production approach, say “take sheets of colored cardboard or acrylic, and punch holes of varying shapes into them, then layer them on top of each other so the holes line up (or not), and randomly spray contrast-colored paint on some of them”. Then I describe the possible variations and variables I want to control to the AI, such as the density of shapes, the thickness of the borders, the ratio between angular and organic lines, and we iterate after seeing some of the results. Just think of all the methods and ideas you might want to play with, and how this lets any old idiot model them on their computers!

    The meta project is that I’ve made a modular app that handles all these different styles for me, whether they require a 2D canvas or WebGL. The app provides a common UI layer that all “styles” can plug into, which allows me to control them. Now that it’s done, I can just focus on experimenting and having fun making new artworks. I daresay a few of these are executed as well as any of those I spent money on.

    I’ll probably release it as a wallpaper generator once I have enough styles built in, if anyone’s interested. But mostly I love having this as a background project that I can dip into, on and off. It allows me to take on other app ideas as momentary “side quests”.

    While making Labyrinth, I showed a maze to Cong, who said “You should do a puzzle maker”. To which I said, “Nah.” And then a minute later… “Although, a daily maze game. Hmm.” It made sense that I could save time by taking CommonVerse’s daily random generation mechanic and combining it with Labyrinth’s logic to make a daily maze challenge. But would it even be fun to trace a 2D maze with your finger and try to solve it? No… so what if it was a 3D maze you had to escape?

    The first prototype took a couple of hours, and I’ve been polishing it for the last few days. I think it’s coming along nicely. I’ll put it out soon, once I balance the difficulty and get more feedback from testing.

    The development of a maze, a maze, a maze… was hampered by a rare bar crawl with Howard and Jussi on Thursday night that gave me a massive hangover lasting into Friday afternoon. When I got home, I was too plastered to care that my vinyl copy of J Dilla’s Donuts had arrived from Amazon US protected by nothing more than a flimsy paper envelope. By the clear light of day I was amazed that they would even do such a thing. The discs are intact, but the sleeve has a bent corner. If I’d ordered from Amazon Japan, I would bet a major internal organ that it would come wrapped in four layers of stiff cardboard, bubble wrap, and a handwritten apology for their carelessness.

    Did I mention we’re going to Japan again? It’ll be a short vacation, in a couple of weeks’ time. Not much on the agenda, just checking in on the state of curry rice and egg sandwiches. Maybe see some nice art. Take some photos.

    Which brings me to the latest betas of Halide MkIII, which I’m very much looking forward to using on the trip. They’ve been making good progress on the app, and it might enable the Holy Grail of iPhone photography workflows for me. Ironically it involves using Halide not as a camera app, but just as a photo editor. You can shoot compact (lossy, JPEG-XL compressed) ProRAW photos up to 48 MP with the default camera app, then edit them in Halide to have the same look as their Process Zero photos! What this means: you get all the benefits of computational photography at time of capture, including noise reduction and night mode, but you’re also free to dial it back and get natural, “real camera” photos in post if the scene calls for it.

    As much as I like these side quests, I think making my own photo editor would be biting off far more than I could chew, so I’m still rooting for these guys to crack it.

    While writing this post, I got the news that an elderly aunt passed away at the age of 93. She had been in reduced health since the Covid years, but by all accounts she went very peacefully and I guess you can’t ask for much more than that after a long life. The extended family’s Chinese New Year routines fell apart in recent years after she pulled back from organizing them, so it was fitting that some of us got to reconnect at her wake on Sunday evening.

    See you next week.

  • Crumbs

    Crumbs

    A location journal that’s actually yours.

    Try it at CrumbsMap.vercel.app


    Most map apps are for navigation, not remembering. You know those sequences in old films like Indiana Jones where a dotted line traces across a map from city to city? That is what Crumbs does, except it is your life and the dots are places you actually went.

    Effortless location logging

    The idea is simple: press a button to log where you are, write a note if you feel like it, and watch your trail build across the map. No passive background tracking, no accounts, no selling your movements for ad targeting. Just the places you chose to remember, connected by a line, yours to keep.

    Most travel apps get this wrong in one direction or another. Google Maps’ Timeline tracks you constantly whether you want to remember or not. Swarm needs a business listing to exist before you can check in. Neither lets you draw a line across a whole week, or a custom trip length, and export it cleanly. Crumbs does all of that, and stores everything locally on your device.

    Do things with your data

    There is a list view that lays out your stops like a journal with timestamps, weather, and location metadata, exportable as a PDF keepsake. You can also save a clean image of your map at any point, ready for sharing or scrapbooking.

    Crumbs is a PWA (Progressive Web App), which means mobile operating systems may occasionally purge its local data if it hasn’t been used in a while. However, connect your Dropbox account and we’ll sync with the cloud automatically, so you won’t lose a crumb. If Dropbox isn’t your thing, manual JSON export and import are available for backups. Either way, your data is yours to keep and use freely. Vibe code an app to generate custom posters, for example.

    If you want native background tracking that runs without you thinking about it, I recommend Where Now? — a free indie app by Scott Boms that also logs your location privately. Crumbs can import Where Now’s data exports so you get the map trails and other features. Best of both worlds.

    Other details:

    • Pins capture location, date/time, city/country, and current weather conditions from Open-Meteo.
    • Works offline: Pin your location while off the grid, and Crumbs will show it on the map when you’re back online.
    • Filter map and list views by Today, This Week, This Month, All Time, or a custom date range to see only specific trips.
    • Uses standard Plus Codes as a shorthand for geolocation, so PDF exports retain all relevant information in a human-friendly form.
    • Open any pin location in Google Maps for more detail on nearby places of interest from that moment.
    • Minimal, glassy UI that “puts the focus on your content™”.
    • Bread mode: replaces all red pushpins with baked goods.
    • Red string trails can be disabled in settings.
    • Pins can be moved via drag and drop if necessary.
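
    An aside for the technically curious: Plus Codes are just latitude and longitude written in interleaved base-20 digits, which is why they survive a PDF export as plain, human-friendly text. Here’s a minimal encoder sketch, enough to produce a standard 10-character code (real apps should use Google’s open-location-code library):

```python
# Open Location Code digit alphabet (base 20, ambiguous letters removed)
ALPHABET = "23456789CFGHJMPQRVWX"

def encode_plus_code(lat, lng):
    """Minimal sketch of a standard 10-character Plus Code encoder.
    Latitude and longitude are shifted positive, then written as five
    interleaved base-20 digit pairs, each pair 20x finer than the last."""
    lat += 90.0    # shift to 0..180
    lng += 180.0   # shift to 0..360
    code = ""
    resolution = 20.0  # first digit pair covers a 20-degree cell
    for _ in range(5):
        lat_digit = int(lat // resolution)
        lng_digit = int(lng // resolution)
        code += ALPHABET[lat_digit] + ALPHABET[lng_digit]
        lat -= lat_digit * resolution
        lng -= lng_digit * resolution
        resolution /= 20.0
    return code[:8] + "+" + code[8:]
```

    For example, encoding 47.36559, 8.524997 (central Zurich) yields “8FVC9G8F+6X”, a roughly 14 m × 14 m cell.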

    Disclaimer: I made Crumbs with the help of Google’s Antigravity and Gemini 3.1 Pro. Your location data stays on your device. I have no idea where you’ve been.

    Related blog post: Week 14.26

  • Week 13.26

    Week 13.26

    I finished my sixth app: DataDeck. It simulates a fictional hardware music player called the DataDeck SG-01, or more accurately, a music generator. It reads live, open data feeds from the Singapore government’s data.gov.sg portal and translates them into unique musical compositions.

    My first prototype ingested the tourism stats for International Visitor Arrivals to Singapore since 2008, and when I first experienced the silence of the Covid years, with the beat gradually building back up again after 2022, I knew I was on to something. Data sonification is a cool term for nerds, but hearing the stories stored in the numbers is something anyone can understand and appreciate.

    At about ten days of development time, it’s the biggest project I’ve delivered so far with the help of AI — there’s no telling how long it would have taken me to do on my own. A million years? Instead, in just 10 days: parsers for 10 different datasets, 10 varied musical styles, and 10 switchable themes.

    The inspiration for its interface was the kind of hardware devices my dad had in the 70s and 80s: calculators, microcomputers, and tape decks from companies like Braun, Sharp, Sony, and Texas Instruments. A sort of Rams-ian, Bauhaus-ish modernist school of industrial design. The different color schemes you can choose from evoke specific brands or devices, like Apple’s Snow White era, the original Nintendo Game Boy (DMG-01), and the Roland TR-808. I especially enjoyed working within the constraints of an imagined hardware UI, so when you switch to a dataset mapped to Singapore’s physical geography, the drum pad buttons get remapped to move a reticle around the map. It makes it feel more real, imo.

    The idea of playing with procedurally generated music using software-synthesized Web Audio was probably seeded years ago when I collected the 0xmusic series of art NFTs, which generated endless musical sequences from code on the Ethereum blockchain. I dare say that DataDeck is more advanced, and with better sounding musical output than those. Plus I’m making it free, and you don’t have to risk social judgement by going anywhere near crypto.

    I’m especially proud of the app’s design and musical qualities. There are a hundred little details in this thing I could mention that were cool to implement, but users don’t have to know or care about. Although it’s an app made for myself by myself, I’m still inordinately satisfied with and impressed by it. I’ve helped deliver a few apps in my career (some of them even won awards), but DataDeck already feels like one of my favorites.

    I think that’s because designing in the real world is all about the navigation of compromises — technical debt, financial limitations, organizational will, and a lack of time all get in the way of polishing features you know could be great, or fixing annoying bugs that other stakeholders don’t seem to mind. Personal projects are not like that, and acceleration with AI makes them even less so. I made this thing how I wanted, and was able to tweak the mix or rebuild a cassette’s music logic from the ground up twice a day if I wasn’t happy with it.

    I’ve also been thinking about how narrow the term “vibe coding” is. On one hand, one-shotting an app by asking Claude to “build me a kitchen timer” is vibe coding. But using AI to create a complex tool where humans design the screens, sweat the UX, and look after the details is also kinda vibe coding. I talked recently about how the distinction between designing and developing will fade, and making stuff is all that will matter, and so it stands to reason that eventually coding with AI will just be called coding.

    I spent Friday afternoon with Jussi meeting up with two separate friends, both also middle-aged men, who are similarly interested in this evolution of design/development work, and who are working on their own projects with Claude Code, OpenAI Codex, and other tools. We’re all at different levels of familiarity and sophistication, but it was good to meet for a little co-working + Show & Tell time at cafes on a weekday. I think there’s value in forming a little “late boomers’ coding club” for fellow initiates.

    In any case, I’m hella tired, guys. I started on my next app idea but immediately got hit by fatigue on Saturday afternoon and needed a nap. Switching gears from audio generation to working on more visually-oriented functions was too much context switching to do over the weekend. Think I’ll finish reading a couple of books first before getting back to it.

    I know it’s been app-this and app-that around here for the last month and so maybe some readers (or a future me who’s been thrown in ethics jail for AI use) will appreciate hearing about other things. Let’s zoom all the way out then, into outer space.

    The film adaptation of Project Hail Mary is getting such great reviews and most people in my book club have already seen it. Unfortunately, I have to wait because Kim has finally started reading it, about three years after I told her to. Hopefully she’ll finish before the local IMAX run ends, but nothing in this life is guaranteed.

    There’s just something about stories of people in space, either lost or stranded, alone or in a small team, solving problems with limited resources, all the while confronted by the massive universe-facing perspective of being so small and meaningless. Andy Weir’s The Martian really resonated with people, and Project Hail Mary is having its moment too. I also enjoyed Daniel Suarez’s two Delta-V books a few years back. But the ultimate one that has yet to be beaten for me is Neal Stephenson’s Seveneves.

    The book I’m reading now might be a serious contender though. I’ve had Samantha Harvey’s Orbital on my list for the better part of a year, knowing very little about it, except that it’s about astronauts. Now that I’ve started, I don’t want it to end, I want more of everything, more words from this magnificent brain. You’ll know by the end of the first three pages whether this is a book for you. It’s intensely beautiful, unusual writing. It borders on poetry — perhaps too melodramatic for some — actually it steals over the border by moonlight and maps the territory. I don’t know how Harvey knows what it feels like to be in space, and what astronauts think about as they look down on Earth, but she absolutely does. You can’t write like this unless you’ve stowed away on an ISS mission and been through it. It’s a monumental work, and the best book I’ll probably read all year.

    Literally on the other end of that spectrum, the book club has decided to read Michael Crichton’s Sphere, which is set at the bottom of the ocean and probably isn’t very beautiful or philosophical. I read it once, maybe thirty years ago, and thought I only remembered the contours of its plot, plus flashes of the 1998 film adaptation starring Dustin Hoffman. As I read its opening pages, I was shocked at how familiar some of the writing and scenes were. It must have made an impression on me.

    Since the moratorium on spoilers has probably passed, I think it’s okay for me to mention what I recall: it’s about a mysterious ship that a bunch of scientists are trying to study in a deep sea lab. As time passes, they experience unnatural events, and it’s revealed that the titular sphere onboard has been “having an effect on them”. It’s a mashup of The Abyss and Solaris, essentially. I don’t want to rush Orbital, so I’m going to put that aside and work through Sphere as quickly as I can.

    Speaking of space, the deep sea, and being packed into tight metal containers, I picked up a can of my usual Ayam-brand sardines in extra virgin olive oil the other day and felt a weird “thunk” as I turned it over. I’ve handled enough of these cans now to know when something feels off. Opening it, I discovered only two fish instead of the usual three. That sensation was them loosely rolling around in the oil. It wasn’t like these were two large ones and there wasn’t room — someone on the packing line simply neglected to fill the available space and closed it up. At first I was incensed, and then I tried to let it go. We all deserve to make mistakes, and some sardines should get to enjoy a little more personal space. Be good to yourselves, and I’ll see you next week.

  • DataDeck

    DataDeck

    Introducing the DataDeck SG-01.

    Turn on, tune in, and nerd out at datadeck.app.

    Singapore generates (and publishes) an extraordinary amount of data about itself — temperatures, taxi coordinates, dengue clusters, carpark availability, ticket sales at major attractions. Numbers that civil servants read in spreadsheets and the rest of us ignore entirely. The DataDeck asks, “but what does it sound like?”

    Each Data Cassette draws live government feeds from data.gov.sg and renders them as distinct genres. There are ten cassettes in all, each with its own acoustic logic and way of interpreting the city.

    The Climate cassette pulls real-time NEA temperature and humidity readings across 12 geographic sectors and converts them into lo-fi hip-hop — with chords deepening as humidity climbs, and the scale drifting toward Lydian as the heat rises. The Transport cassette tracks unoccupied taxis plying the streets and generates a relentless 303-style midnight techno. HDB carparks become polyrhythmic Afrobeat, and the movements of the stock exchange drive a satisfying hip-hop groove. Get money y’all! Check out the sound of visitor arrivals during the COVID years: like musical crickets.
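
    If the mapping sounds mysterious, the mechanics are straightforward: normalize a reading into 0–1, use it to pick notes from a scale, and let conditions swap the scale itself. A simplified sketch of the idea (DataDeck’s real cassettes are far more elaborate):

```python
# Semitone offsets for two modes; Lydian is Ionian with a raised 4th
IONIAN = [0, 2, 4, 5, 7, 9, 11]
LYDIAN = [0, 2, 4, 6, 7, 9, 11]

def scale_for_temp(temp_c, lo=24.0, hi=36.0):
    """Drift from Ionian toward Lydian as the heat rises (reduced here
    to a simple flip at the midpoint of the temperature range)."""
    t = max(0.0, min(1.0, (temp_c - lo) / (hi - lo)))
    return LYDIAN if t >= 0.5 else IONIAN

def note_for_reading(value, vmin, vmax, scale, base=48):
    """Quantize one data reading onto the scale, as a MIDI note number."""
    t = max(0.0, min(1.0, (value - vmin) / (vmax - vmin)))
    return base + scale[int(t * (len(scale) - 1))]
```

    Feed a stream of readings through a mapping like this and the data literally writes the melody; the genre comes from what rhythm and instruments you hang the notes on.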

    The controls? Three knobs shape density, tempo, and atmosphere. A mix fader redistributes the instrument balance. AUTO mode hands navigation back to the machine. There’s a user manual built in, should you get lost.

    It’s a music player with no music files. It’s a data dashboard you can close your eyes to. It’s Singapore, rendered in sound. Put your headphones on, and press play.

    Pro tip: If you really love DataDeck, you can save it to your phone’s Home Screen, which gets you a nice icon and a full-screen mode that shows the whole device at once without distractions.


    Disclaimer: I made this with the help of Gemini 3.1 Pro because I’m just an old designer who hasn’t coded stuff since GeoCities. I take no responsibility for any damage you cause yourself or others with this. Thank you.

    Related blog post: Week 13.26

  • Week 12.26

    Week 12.26

    Another busy week, and I’ve been like a caffeinated creature hunched over its keyboard with bloodshot eyes. You may notice I’ve updated the navigation bar on this site to point to a dedicated page listing all my apps. This takes the place of a page that pointed to all my custom GPTs on ChatGPT (that never really took off, did it?) and before that, my NFT experiments. Those are still around, though!

    You may call it AI slop but I’ve generated key images for each of the apps on that page, which I like to think of as analogous to game box cover art, those evocative artistic representations that used to stretch truths to their breaking points, back in the days when games looked like Lego.

    Here they are, just so you can admire them.

    The latest for now is CommonVerse, my daily magnetic poetry app. Give it a go!


    I’m writing this paragraph on Thursday after another failed attempt to stop vibe coding and focus on other pursuits. So far I’ve mostly finished one project and started on another that I meant to leave aside until next week. What is this feeling? This need to actualize a new ability that I’ve always wanted but never had to worry about not having?

    Instead of being able to recognize that I’ve already accomplished a lot, and “taking the rest of the week off” to go watch movies or something, I’m sucked into continually iterating and improving upon these apps like I’m on a deadline. It’s that paradox (mentioned here last May) where new technologies don’t decrease our workloads but only make us busier instead.

    Productivitymaxxers will say this is fine. This is how it’s supposed to be: you can do more, so you work just as hard and get twice as much out of it. Why would you want to work half as much? And they’re not wrong — that’s the engine of progress. But it’s also how you end up making six apps in three weeks and treating it as some kind of baseline rather than a miracle. As predicted, my capability has grown but I got desensitized to the satisfaction.

    The discomfiting shock to the system as I struggle with this resetting of scale, and feeling addicted to realizing more ideas, is an adaptation crisis. Adapting to life at a new speed and learning to balance capability with sensibility. Astronauts and pilots have to train to handle G-forces, in which the G stands for gravitational. I’m suggesting that working with AI has its own G-force, where the G is gratification. You can suddenly manifest many of the things you can think of. That’s a very powerful impulse to get under control. How do you engage with life’s responsibilities, appointments, or your growling stomach, when there’s always just one more prompt and revision to make? After getting home from a few drinks on Friday night, I found myself on my laptop in bed after midnight, fighting with a procedural audio generation engine that wouldn’t trigger drum sounds for any obvious reason.

    The next night I did the same, staying up until 3:30 AM because I had some new ideas that just could not wait. My Apple Watch sleep score is in shambles. But App #6 is certainly shaping up to be my best work. I’m going to sit on it for a whole week and keep polishing, instead of putting it out and moving on to the next one. That’s my strategy for slowing this down — it’s all I’ve got.


    Over the weekend, I also attended an Apple Store photo walk activity on a sweltering afternoon (up to 36°C next week) with Cien and Peishan. I hadn’t done one of these in years, though I always mean to. This one was conducted by the staff at Apple Orchard, and was a walking tour of Emerald Hill — which in reality is just a tiny street off Orchard Road. I’ve been there dozens of times over the years, but never saw the details just sitting there in tiles, old paintwork, and ornamental doorframes. Going to a small area with the intention of taking photos, and giving it more time than you’d normally allocate, can be a really fun and creative exercise.

    There’s no reason one couldn’t do this themselves any time, anywhere, of course. But these free ‘Today At Apple’ sessions are a good excuse to get off the couch. The other two local stores have their own programs, and I might check them out someday: Apple Jewel Changi Airport looks at the indoor waterfall, and Apple Marina Bay Sands has a night photography focus.

    Another nice touch is that they’ll lend you an iPhone 17 or 17 Pro if you don’t have one, and they’re incredibly relaxed about handing them out. No paperwork to fill out or deposits to pay. That’s the great thing about Find My protection, I guess. A comment was made that in the UK, those phones would disappear the instant the group left the store — even if just for parts. But they must do these sessions worldwide, so I’d love to know how it’s dealt with.