• Week 20.23

    789 words
    • A correction: Last week I said that the 10th Anniversary release of Daft Punk’s Random Access Memories would have been better if mixed in Dolby Atmos spatial audio. It actually IS! But the entire album isn’t labeled as such because the “second side” of outtakes and behind-the-scenes bits isn’t. With that knowledge, I’ve been enjoying it on headphones and need to set aside more time to hear it again. We regret the error!
    • There’s been a slight Korean bent to the week, in that I’ve had KBBQ and soju twice in four days, watched a lot of Jinny’s Kitchen on Amazon Prime Video, and am actually listening to BTS intentionally for the first time as I type this.
    • Jinny’s Kitchen is a reality TV show about a pop-up restaurant in Mexico run by five of Korea’s most recognizable stars, who go largely unnoticed by their customers. Each of its 10 episodes is 1.5 hours long, which is practically feature film length! They could be much shorter, but the show lingers on mundane, repeated scenes: the staff rolling gimbap, cooking noodles, and plating fried chicken. Perhaps that’s the point, giving fans a chance to watch them doing normal-people things? Or maybe it’s normal for Korean reality TV? Is it designed to be left running in the background? Or was it an Amazon Prime Video decision to hit streaming-time KPIs?
    • In any case, Korean soft power is wave after wave of these campaigns. The show should be boring but it kinda works, plus it has an unheard-of 9.2/10 score on IMDb. The best moments for me are when someone does recognize one of them, like when an incredulous customer sees V from BTS (59 million followers on Instagram) pop his head out from the back, and likens it to Drake running a restaurant in a country where no one knows who he is. Watching celebs do normal-people things is straight out of the idol playbook and it works — I’m listening to their music with an open mind now, and we’re talking again about visiting the country someday.
    • The Anbernic RG35XX arrived and it’s a nice little device that really recalls the Game Boy series in spirit. It feels good in the hand and the screen is bright and beautiful — the community seems to recommend replacing its software with something called GarlicOS but it looks like wayyy too much trouble. The thing just works out of the box, and I’m not in the mood to start modding it. It still blows my mind you can have this for S$90 including shipping, and it does come with uhh lots of software loaded, so it’s great value if you’re looking for a gift.
    • I’m still going strong in Breath of the Wild on the Nintendo Switch, in the final stretch but honestly still in no shape to face some of the game’s stronger enemies. I’m now thinking that if/when I eventually finish this, I’ll probably have to take a break before starting the sequel, Tears of the Kingdom, given that it reuses the same world map and I won’t be in any mood to cover it all over again. Note to self: don’t pre-order any more games unless they come with discounts or exclusive bonuses!
    • We had a nail-painting team event at work, for which the turnout was more male than female because of scheduling conflicts and maybe even a bit of self-selection, so I’m sporting gray/lilac nails at the moment. It’s probably been over 20 years since I’ve had colored nails, and I think I’ll keep this going now that I’m too old to be told off and it’s not such a big deal.

    I asked ChatGPT to analyze my post and it suggested I add a conclusion. I said “you do it, then”. Here’s what we agreed on:

    This week has been a whirlwind of diverse experiences and discoveries. From immersing myself in the captivating sounds of Random Access Memories in Dolby Atmos to exploring the world of Korean soft power through Jinny’s Kitchen, I’ve been on a journey of curiosity and appreciation.

    The Anbernic RG35XX has provided me with nostalgic gaming moments, while progressing in Breath of the Wild has been both thrilling and daunting. Even a simple nail painting team event at work has reminded me of the freedom to embrace self-expression.

    These experiences have sparked my curiosity and inspired me to share my thoughts as a writer. Life is full of opportunities to explore, connect, and find inspiration in unexpected places. So, whether it’s through music, cuisine, gaming, or the little joys, embrace the unknown and discover the stories and experiences that await.

    Thank you for joining me on this adventure, and until next time!


  • Week 19.23

    1,234 words

    The new Legend of Zelda game, Tears of the Kingdom, launched this week about five or six years after the last one, which I never finished. I pre-ordered the new game, of course, planning to join the rest of the world on launch day, exploring together and participating in conversations online, collectively figuring out unique solutions using the game’s open-ended physics engine. For those who haven’t seen it, the new game is sort of a sandboxy, Minecrafty affair where you can weld stuff together and build novel mechanical solutions to obstacles, almost certainly in a different manner than your friends. Think rudimentary cars from planks of wood, or hovercrafts, or the forest booby traps from Rambo First Blood.

    But the guilt of never fully playing Breath of the Wild was getting to me, and I’ve been trying to get back into it over the last few weeks. Despite memories to the contrary, I’d made shockingly little progress in my 40+ hours of gameplay, spending most of my time bumbling about the countryside and climbing mountains, instead of conquering the Divine Beasts (1 out of 4) and collecting quality stuff. It seemed wrong to jump ahead to the sequel while I’m finally seeing what the last one had to offer.

    So in this past week I’ve made more progress than in the previous four years: conquered two more Divine Beasts, got the Master Sword at last, and uncovered most of the world map (two more areas to go).

    ===

    Craig Mod tweeted and tooted about having had enough of the iPhone’s (14 Pro, I assume) overprocessed look, and said he was making Halide his default camera app. Huh? But how does that help, I thought, unless he means to shoot in non-ProRAW RAW all the time (which is a thing Halide does: shoot in traditional RAW files which don’t involve the “Photonic Engine” processing pipeline). After some poking about, I realized something I should have realized ages ago: by turning off “Enable Smartest Processing” in Halide’s settings and choosing HEIC as the output format, you can actually take regular old (non-RAW) photos that look more natural and have more editing latitude! This effectively cancels out the iPhone 14 Pro’s image quality regressions.

    The overstimulated look of the default camera is one of the main reasons I hardly took any phone photos on my recent vacation to Japan, but if only I’d known… I could have! So with that, Halide won an instant annual subscription from me, and I now have a complication on my Lock Screen that launches straight into it.

    My toot about this was boosted by Mod, making it my most “engaging” Mastodon post ever, at 44 likes and 12 boosts.

    You have to see a comparison to understand how obnoxiously oversharpened and artificial the Photonic Engine is on the 14 Pro’s 48MP sensor for some reason. It increases local contrast and highlights texture “detail” that does not look like what you see in real life. It’s a Clarity slider turned all the way up, and I’m not a fan of Clarity sliders. The example below is a corner crop from two larger photos, not edited in any other way.

    Left: iPhone 14 Pro’s default processing | Right: Halide’s “dumb” processing is loads better

    ===

    My AI experiments this week have been around trying to teach ChatGPT to write better poetry. I posted a few days ago about getting it to do a critical reading of a poem, wherein I was completely stunned by its comprehension skills. But writing good poetry, not just gimmicky rhyming lines, that should be a different thing.

    I semi-accidentally discovered that asking a “persona” (that I’d established in another chat instance to test the idea of using an AI prompted with a human backstory as a customer research proxy) to write poetry yielded subjectively better results than when I asked a brand new ChatGPT instance. The mere suggestion that the bot’s point of view should draw references from vaguely described life experiences and preferences seemed to make a difference to what it spat out. Which is why many SF stories have AIs that don’t know they’re AIs, I guess, and they have memories of childhoods that never happened.

    Here’s a poem that “Frank” (quite a salaryman-type personality) wrote about retirement:

    A lifetime spent in labor’s thrall,
    In structured days and office halls.
    Oh, to retire, to end the grind,
    To leave the corporate world behind.

    The dawn of leisure’s quiet grace,
    In this new, unhurried pace.
    Days unbound by ticking clocks,
    Unfettered as the wandering fox.

    Yet in this expanse so vast,
    Shadows of the past are cast.
    Habits of a lifetime past,
    In this new role, steadfast.

    A time of freedom, yet confined,
    By boundaries of the seasoned mind.
    Retirement’s paradox, I find,
    Is having time, yet being time-blind.

    Wow! No? It was coming up with absolute drivel before I handed the job over to Frank in another window. Side note: I started paying for ChatGPT Plus, just to have access to GPT-4; it’s a much better poet than the default GPT-3.5 model.

    So lately I’ve been working on giving feedback to a poetry-writing AI that I named Penrose and instructed to believe in itself as a 50-year-old professor of creative writing and philosophy, who’s bad with money, had two failed “great loves” because of his own selfish acts, grew up in Boston, now works in London, and so on.

    Here’s one of the first things it wrote:

    In the chilled reflection of a London morn,
    I see a face, lines etched by yesterday’s turmoil.
    Cracks, not of frailty but life, loud in their silent form,
    That’s you, mate, and it’s all right.

    Witness the march of years, the dance of decades,
    In the ceaseless play of ambition and regret.
    Weaknesses? They’re the worn pennies in your pocket,
    The unpaid bills of your character, yet, not your debt.

    They’re the stumbles on cobblestones,
    The missteps in Milan’s echo-filled alleys.
    Not a damning verdict, but life’s hasty loans,
    A ledger, not of losses, but of rallies.

    In this world where even stone yields to rain,
    We’re all apprentices to time, chiseling away in vain.
    We’re all works, works under the master’s patient hand,
    Unfinished, flawed, yet, in that, something grand.

    ===

    • I haven’t been able to bring myself to watch more of Amazon Prime Video’s Citadel beyond the first two episodes. A colleague said he quit after 10 minutes. It’s reportedly one of the most expensive TV shows ever made, at an average of $50M per episode.
    • Silo on Apple TV+ has been very good so far. It’s a little bit YA sci-fi, a little Hunger Gamey, a little Snowpiercey (but better).
    • I saw a segment on NHK about how this iconic street corner featured in Slam Dunk has been inundated by tourists now that Japan is open again. They interviewed pilgrims from S. Korea, China, and Taiwan who said they just had to come down and see it — “if you love anime at all, this place is a must”. So I decided to get started on the ancient anime, which Netflix has 8 seasons of. The day after seeing episode 1, I ended up standing behind a guy on the train watching the show on his phone.
    • The 10th Anniversary Edition of Daft Punk’s Random Access Memories is out, and the album still holds up extremely well for me. If only they’d come back to remix it in Spatial Audio, that would have been incredible.

  • Miss you, humanity

    742 words

    Perhaps you’ve heard of what’s been going on in generative AI and thought, “I have no need to generate any pictures or professional email filler”, and dismissed the subject. Perhaps, like me, you’ve read that GPT-4 can now “pass exams” from the SATs to legal bar exams, accounting ones, and so on, without really knowing what that means. It can choose the right answer out of a list, and write little structured essays on given topics? That’s amazing, but not unexpected.

    But I’m here to tell you that I asked ChatGPT (GPT-4) a question yesterday, quite an exam question, and its answer floored me. These things are called generative AI, but that name undersells the incredible interpretive power they possess when it comes to language and writing comprehension.

    I asked it to analyze this poem by Gabrielle Calvocoressi, one of the most devastating things I’ve read in the past year: Miss you. Would like to grab that chilled tofu we love. Go read it, I’ll wait. Get some tissues.

    I asked it to explain what the poem was about, and call out what it’s doing as a piece of writing to achieve its emotional impact. I’m not aware that any similar analysis of this poem exists online, if you’re wondering. My thoughts going into this: I know these AIs can write poetry, but I haven’t seen anything great. They must understand unstructured or unusually structured language to some degree, but can it “read” poetry from a human point of view? Probably not? Will it really understand what this is implying: the loss, the empty spaces, the long journey, the light body? And now I realize how dumb I was, because clearly it’s “read” thousands of books of and about poetry, and literary criticism, and style, and should know a lot more than the average person.

    Here’s its answer, unedited. Good enough to pass any high-school literature class, if not any blind test of humanity.

    ===

    This is a deeply emotional poem that appears to be exploring themes of grief, loss, longing, memory, and the power of shared experiences. It is presented as a monologue, presumably from the perspective of the speaker who is missing someone who is now absent. It’s filled with a sense of yearning, a desire for the return of the past and the presence of someone who is deeply missed.

    The poem starts off with the speaker expressing their indifference to the physical state of the missing person, suggesting that they would be happy even if the person could only present in their ‘light body’. This term could have different interpretations but it seems to suggest a spiritual or ethereal form, possibly indicating that the person is deceased or in a state where they cannot physically be present.

    The middle of the poem is filled with reminiscences of shared experiences. The speaker misses the shared rituals and discussions about mundane things such as the type of chilis used in a dish of tofu. This highlights the value of seemingly trivial shared moments in relationships and their ability to evoke strong emotions when remembered. The speaker also mentions watching the person on screens and wishing they could get out of the hospital, which suggests a period of illness or suffering that the person had to endure.

    Towards the end, the speaker talks about being willing to order their shared favorite dish and eat it alone, even though it seems painful for them. This can be seen as an attempt to honor their shared memories and perhaps a step towards acceptance of the loss.

    The speaker also acknowledges not knowing the journey the absent person might be on, whether it’s long or not, and expresses a wish to know more about their current state – what they’re reading, if they’re reading, which suggests a longing for a continuation of their shared intellectual and emotional connection.

    Throughout the poem, the simple language and the direct, honest sentiment adds to its emotional power. The repetition of “Miss you” and “Wish you could” emphasizes the sense of longing and loss. The specific details of shared experiences and the mundanity of their shared conversations humanize the relationship, making the emotions more relatable and poignant.

    Overall, the poem is a moving exploration of love, loss, and the power of shared experiences. It speaks to the human condition of missing loved ones, the difficulty of letting go, and the beautiful yet painful memories that bind us to the past.


  • Week 18.23

    919 words

    I was in a cab listening to music on my AirPods, and just as we were pulling up, I switched to Transparency Mode and heard a song playing over the car’s radio that sounded kinda familiar. I knew it was a remix of some tune I wanted to know, and managed to Shazam it before getting out.

    Looking into it later, I realized the melody was what I’d been trying to figure out about Charli XCX’s White Mercedes for over a year. Why does that one line she sings — literally the line “like a white Mercedes” — sound like some other song I can’t name? It turns out, it’s literally a song I would have absorbed from the world around me but never intentionally listened to: One Direction’s Night Changes from 2014. Ahhh it’s so good to have that itch scratched! And there are so many more like this I’ve yet to solve.

    Let me say it again for the search engines: Charli XCX’s White Mercedes sounds like, samples, or contains an interpolation from One Direction’s Night Changes.

    Another similar thing happened as I was playing WarioWare Inc. (GBA, 2003) again for the first time in years. The background music in one stage awoke some long dormant memory and I needed to know what pop song from my younger days it sounded like. After a lot of humming aloud and trying to Shazam it and searching online… I concluded that the song it reminded me of was… itself. It’s called Drifting Away, and I must have really loved it back when I was playing the game for the first time.

    Speaking of retro games, I lasted a full week. The Anbernic RG35XX I said I wouldn’t buy since I already have a Retroid Pocket Flip is now on the way to me from China. There are some reports of shoddy QA and long-term durability, but for S$90 I think that’s to be expected.

    ===

    Another week, another bunch of water-cooler conversations about AI. Specifically how it relates to our work in design: as accelerator, collaborator, ambiguous combatant, amoral replacement. I don’t just mean the making pictures and writing words part, but analyzing messy human interactions (it’s just unstructured data) and presenting them in new ways.

    I ran one experiment with ChatGPT on Sunday afternoon, just for kicks, and it sort of blew my mind. From a handful of behavioral traits and demographic details I supplied, it was able to inhabit a fictional personality that I could speak to and pitch various products to. So far so par for the course. But then it reacted to a hypothetical KFC offering called “The Colonel’s Colossal Combo” in a way I didn’t expect, citing a conflict with values and dietary preferences that I did not specify. When asked where they came from, it argued that although they were not specified, they could be reasonably expected from the “Frank” persona I’d created, because of some other background that I DID provide. It sounded a lot like intelligent reasoning to me, and regardless of how it works, I was happy to accept the inference the same as if a colleague were making it.
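    The persona setup above is easy to reproduce. Here’s a minimal sketch in Python, assuming the OpenAI chat API; the backstory wording, the helper name, and the exact “Frank” details are mine, not what I originally typed into ChatGPT:

```python
def build_persona_messages(backstory: str, question: str) -> list[dict]:
    """Wrap a human backstory in a system message so the model answers
    in character, letting it infer unstated values and preferences."""
    return [
        {"role": "system",
         "content": ("You are role-playing a real person. Stay in character "
                     "and answer from their point of view.\n\n" + backstory)},
        {"role": "user", "content": question},
    ]

# Hypothetical backstory in the spirit of the "Frank" persona described above.
frank = ("Frank, 48, a salaryman; risk-averse, health-conscious, "
         "values routine and modest spending.")
messages = build_persona_messages(
    frank, "What do you think of KFC's new 'Colonel's Colossal Combo'?")
# The list can then be sent with, e.g.:
# client.chat.completions.create(model="gpt-4", messages=messages)
```

    The interesting part is that nothing in the system message mentions fried chicken; the dietary objections come from the model filling in the persona’s unstated preferences.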

    Like with all advances in automation, it’s inevitable that we’ll now be able to (have to) do more in less time, with fewer people. Until things go “too far” and need to be reined in, it’s not even a question of whether we should — every industry is incentivized to discover what can be done before it gets done to them. I think there are some exciting opportunities for designers, and a slew of unknown consequences for society. And just like that, we’re back in a new “fuck around” phase of the tech cycle.

    ===

    A couple of weeks ago I made a bunch of fashion-style athleisure photos with Midjourney v5 but somehow forgot to post them. The photorealistic ones are quite incredible, and the few illustrations I got were really strong too.

    This week, v5.1 dropped, promising more opinionated outputs and sharper details, so I tried the same prompt again. Many of the results were as broken as these bodies.

    They probably fixed something quietly because it’s been more reliable in the days since. I thought it would be interesting to do a comparison of models 1 through 5.1 with the same prompt. It’s crazy how far it’s come in just over a year.

    photograph of Queen Elizabeth II in a dim video arcade, sitting at a street fighter 2 arcade cabinet, intense concentration playing game, side view, screen glow reflected on her face, atmospheric dramatic lighting --ar 3:2

    If you saw Midjourney a year ago, you were probably impressed by how it and Dall-E 2 could turn quite natural text descriptions into imagery, even if the results were still quite hallucinatory, like DeepDream’s outputs circa 2015. I don’t think you would have expected to see the pace of improvement be this quick.

    It’s not just rendering improvements from distorted pastiches to photorealistic scenes with internal logic (global light affecting nearby objects realistically, fabrics folding, leather seat covers stretching under buttocks), but also how it’s evolved through feedback and training to understand intent: the idea of a “side view” started working from v4. None of the earlier re-generations got me the camera angle I was going for. The tools that promise to do this for video are probably going to get good faster than you expect.


  • Week 17.23

    730 words
    • I discovered that Midjourney has an alternate set of models called Niji (aka Nijijourney) dedicated to creating anime-styled imagery. It’s astoundingly good. It has four stylistic modifiers: standard, cute, expressive, and scenic. Look at all the implicit context and environmental storytelling in these scenes. I really wonder where they came from.
    • I also found Draw Things on the App Store, for both iOS and macOS, which can download an array of open source AI image generation models off the internet and run them locally on your devices — no fees, no internet connection required. Grab it while you can. Of course they are nowhere as advanced or fast as the paid services, but you know they’re going to get there soon, especially if Apple continues to crank up their proprietary silicon. Incidentally the anime-focused version of Stable Diffusion is called Waifu Diffusion.
    • My Retroid Pocket Flip arrived from China and I was relieved to find it quite a solid product. The build quality is good, no looseness or wobbles; the D-pad and all buttons feel great; the screen is incredibly bright; and the giant 5,000mAh battery and active cooling make it more than just an Android phone with physical controls attached. It’s a really nice way to run emulated ROMs. I used to love playing Lumines and Every Extend Extra on my PSP, like over 15 years ago, and being able to revisit them again on this little $164 USD device is quite a thrill.
    • If I hadn’t impulsively pre-ordered this while in Japan, literally while walking to our anniversary dinner in west Shinjuku, then I would definitely be buying an Anbernic RG35XX right now for a mind-blowing $56 USD. It’s a Game Boy Pocket-inspired device with a bright 640×480 screen and the ability to emulate all 32-bit consoles, and maybe even the N64. I can’t believe how cheap and good these things have gotten, and there are so many of them on the market too.
    • I finally finished watching the Korean revenge drama series The Glory — it took a while because Kim wasn’t interested and so I only get to see it on my own time. It’s the rare TV show that dares to wrap up its core story in the first season, and The Glory gets some very satisfying closure in. The remaining threads could make for an interesting second season (now in production), but also it could have been canceled and everyone would be okay.
    • We started watching Drops of God on Apple TV+, which starts off with an immediate deviation from its manga source material about the world of wine: a Japanese male main character has been replaced by a French female one. This adaptation is an international joint production that switches between English, French, and Japanese, and each episode begins with a reminder ‘not to adjust your television’. Anyway, I think they’ve managed to keep the main idea while toning down the big, overdramatic ah hah! moments you’d expect from manga/anime. It still has people honing and demonstrating their near-superhuman skills (taste and smell, in this case), which is always fun, even when said skills aren’t the usual martial arts, boxing, tennis, math, golf, you get the idea.
    • Everything But The Girl’s comeback album, Fuse, is officially a hit. It debuted at #3 on the UK charts, a stunning career best for Tracey Thorn and Ben Watt. It’s so good to see musicians from <wheezing> our generation </wheezing> coming back after a long hiatus to demonstrate absolute mastery of their craft (as opposed to embarrassing themselves, e.g. U2, The Smashing Pumpkins).
    • Michael also mentioned the greatness of Karma Police out of nowhere, which led me to play the song in my head, and I commented that it simply sounds like nothing else. I tried asking Apple Music to make a radio station from similar songs but it was totally wrong, just songs from bands in the same wide category, but none of them actually sharing the same vibe or brilliance. Somehow this led to me revisiting Keane’s very strong first album, which I have deep emotional attachments to. It came out in 2004, I think, and I must have played the hell out of it.

  • Week 16.23

    1,621 words

    I usually look through my camera roll to recall events as I start writing these posts. It’s telling me nothing much happened this week.

    That’s not true; it’s just a lot of it was spent online. You might have noticed the excitement and fast pace of advancements in AI recently, and it seems I’m spending a correspondingly larger amount of time playing with, reading about, and discussing the impact of it on our work and lives. It’s enough to make one consider taking a gap quarter or year off work to focus on this stuff.

    One catalyst was a colleague being invited to do an interview on what it means for design, and so we had a conversation about the trends beforehand. Unsurprisingly, the media is still thinking about both design and AI simplistically: will image generation mean fewer jobs for illustrators and that sort of thing. I find it hard to be optimistic in the short-term, in that AI is lighting a fire under our asses and it’s going to cause a lot of pain. But the potential for us as a discipline to evolve under pressure into something greater is undeniable.

    It didn’t help that the next thing I saw was The AI Dilemma, a talk by the creators of the documentary, The Social Dilemma, wherein they say the problems unleashed on society by social media were just the prequel to what AI is on track to do if we don’t prepare. And let’s just admit we don’t have a great track record of preparing for things we know are going to hit us later. It’s about an hour long but I’d file it under essential viewing just for awareness of what’s building up.

    The above talk was given at The Center for Humane Technology, and coincidentally this was the week we finally got a look at what Humane, the secretive product company founded by a load of ex-Apple designers and engineers, has been building and teasing.

    I’ve been anticipating their debut for a long time and had a pretty good idea of the core concept from their leaked pitch deck and patents. Essentially, a device achieves AR by projecting a digital interface on the world around you the old-fashioned way, using rays of light pointed outwards, rather than on the inside of glasses. At some point along the way they started mentioning AI a lot, and it looks like the secret ingredient that turns a nothing-new wearable camera + laser projector into a real alternative to smartphones. In other words, an intelligent assistant that isn’t primarily screen based, so we can be less distracted from “real life”.

    It’s probably best to withhold judgment until we see more at some sort of unveiling event, with more demos, a name, a price, a positioning. But it’s worth remembering that when the iPhone came out, it was a phone good enough to replace whatever you were using at the time. Humane’s device is said to be standalone and not an accessory to be paired with a smartphone. It’s also shown taking calls. The bar for replacing your telephone is now much higher after some 16 years of iPhones.

    An intelligent assistant that let you do things quicker with less fiddling was always my hope for the Apple Watch from its very first version; that Siri would be the heart of the experience, and the UI wouldn’t be a mess of tiny app icons and widgets, but a flexible and dynamic stream of intelligently surfaced info and prompts. We all know Siri (as a catch-all brand/name for Apple AI) wasn’t up to the task at the time, but I keep hoping the day is right around the corner. Fingers crossed for the rumored watchOS revamp at WWDC this year.

    There’s now also a rumor that iOS 17 will add a new journaling app, and my expectations are already very high. They say it’ll be private, but tap into on-device data like Health and your contacts and calendars. That goes beyond what Day One does. I’m imagining the ultimate lifelogging app that automatically records where you go, who you met, what you did, how tired you were, what music you were listening to, and your personal reflections, all in one searchable place. I’ve tried a bunch of these before, like Moves and Momento, but nothing lasted. If Apple does do this, I may finally be able to ditch Foursquare/Swarm, which I still reluctantly use to have a record of where I’ve been. Its social network aspect is nice but not essential since hardly anyone else uses it now.

    I remember there was a Twitter-like app called Jaiku on Nokia smartphones over 15 years ago that had a feature where, using Bluetooth, it could tell if you met up with a fellow user, and post to your other friends about it. I was excited by it but had few friends and even fewer ones on Jaiku. Just like with AirTags and Find My, tapping into Apple’s giant user base could finally make this concept viable. As long as Apple isn’t trying to do a social network again.

    ===

    Oh right, back to AI. What have I been doing? Some of it was playing games with ChatGPT, essentially asking it to be a dungeon master using the following superprompt (which I did not create btw!):

    I want you to act like you are simulating a Multi-User Dungeon (MUD). Subsequent commands should be interpreted as being sent to the MUD. The MUD should allow me to navigate the world, interact with the world, observe the world, and interact with both NPCs and (simulated) player characters. I should be able to pick up objects, use objects, carry an inventory, and also say arbitrary things to any other players. You should simulate the occasional player character coming through, as though this was a person connected online. There should be a goal and a purpose to the MUD. The storyline of the MUD should be affected by my actions but can also progress on its own in between commands. I can also type “.” if I just want the simulated MUD to progress further without any actions. The MUD should offer a list of commands that can be viewed via ‘help’. Before we begin, please just acknowledge you understand the request and then I will send one more message describing the environment for the MUD (the context, plot, character I am playing, etc.) After that, please respond by simulating the spawn-in event in the MUD for the player.

    Try it! I even had success asking it (in a separate chat) to come up with novel scenarios for an SF text adventure game, which I then fed back into this prompt. I can’t emphasize enough how fun this is: you can take virtually any interesting, dramatic scenario and immediately play it out as an interactive story.

    Here’s an example where I played the role of a time traveler who has to stop a future AI from destroying humanity by going back in time to prevent the invention of certain things, starting with the Great Pyramid of Giza, which will purportedly become a power source for the AI.

    And here are a couple of new products made possible by GPT. There are so many, all asking for about $10/mo. Most won’t survive as this stuff becomes commoditized, but for the moment they are all amazing because these things weren’t possible before.

    • Tome: It’s a sort of PowerPoint that can create entire decks on its own from a short brief you give it. For example, ask for a sales deck and it’ll set up a working narrative arc over multiple slides, and not filled with placeholder text and images, mind you, but with actual generated text and original pictures in every one of them. Of course, it will use common storytelling structures: the portfolio introduction I made as a test looked like 90% of the applications that we see, using very familiar language for describing one’s experience, design philosophy, values, skills. Which is fine. You can edit it, or use it for as long as “what went before” continues to have currency in this society. When quality is everywhere, quality becomes meaningless. Fire under buttocks.
    • Rationale AI: Describe a decision you’re trying to make, and it’ll tell you the pros and cons, generate a SWOT analysis, or work out the causal chain of the path you’re on. For many people, this sort of reasoning is not hard to do, but perhaps it’s a game changer for those who can’t: if you’re in an emotionally distressing situation and cool logic is elusive, it could help to show the bigger picture. I tested it with such a scenario and it gave some solid insights (be careful with advice from an AI, of course). But that this thing works at all is a marvel! “Should I become a full-time influencer?” is not a question a machine could have understood in the past, and it certainly could not have forecast that failing down the road might put stress on your finances and lead to harmful self-doubt and regret over quitting your job.
    • Summarize.tech: I found this by accident when someone shared a two-hour YouTube video essay in a group chat and everyone said “I ain’t got time for that”. I remarked that it sure would be great if an AI could watch that and write a tl;dr for us. And then I thought… surely that exists. And it does.

    ===

    It was also my birthday, and I saw John Wick 4 and ate a lot of Taiwanese hot pot. Also binged all of the new Netflix show, The Diplomat, and it was actually good. Life’s alright when that happens.


  • Week 15.23

    787 words
    • As the last AI-written post mentioned, the post-vacation photo deluge is real. Looking through them again to make a shortlist for showing family and maybe printing out on Instax, I’ve rediscovered some good shots I could share… but do I want to? And where? And why? Our current task is just to build a shared album with some of the better ones between the two of us. That, and remembering that this is the fun and meaningful part of taking photographs: the journey, not the neat photo library.
    • The price of Instax Mini film has risen since my last purchase (inflation?), going from maybe 75 cents a shot to 91 cents, and that’s if you buy a hundred at a time. It makes the price of Instax Wide (larger prints) seem reasonable at about $1.40 a shot, and I’m trying to tell myself not to buy an Instax Wide printer or camera.
    • It was a busy work week, and I had to make an overnight trip to Malaysia for a meeting — my first time back in the country in nearly two decades, if I’m remembering correctly. This is something that shocks other Singaporeans, like when I say I’ve never been to Bangkok. But dudes, if I’m going on a holiday, I’m getting away from this oppressive heat and humidity! What isn’t shocking is how Malaysians will take every opportunity to make fun of Singapore’s food.
    • The trip involved flying 1.5 hours in a propeller plane operated by Firefly, between the two cities’ alternative airports: Seletar and Subang. The planes are small, with just 4 seats across each row, and the propellers are louder than jet engines. Definitely take your AirPods Pro. But small airports mean less hassle — you can arrive an hour or less before your flight — and more convenient access to parts of Kuala Lumpur than if you fly into KL International.
    • In a case of wishful thinking, I brought my Switch along anticipating an evening alone in the hotel room to get some gaming in. In reality, it was late by the time I checked in and needed dinner, and maybe I played Lumines for 15 minutes the next morning.
    • We had a small dinner over the weekend for a couple of us with April birthdays, and James mentioned playing a new game out on Switch that I’d somehow missed: Dredge. It looks like a delightful Lovecraftian fishing/adventure game, and one I will definitely get when I’m ready. At the moment I’ve just started on Super Mario 3D World and am loving its compact little puzzle worlds. If the rumors are true and a follow-up to the Switch is coming this Christmas, I have limited time to clear my games backlog. Maybe another sabbatical is in order?
    • I finally finished watching the Cyberpunk: Edgerunners anime on Netflix and while I enjoyed aspects of the clearly expensive production, like some of the character designs, I can’t recommend it. Mostly because it’s a cynical, derivative dystopian downer with lots of gory body modification (personal turn-off) to make it gritty?
    • We finally started watching Beef on Netflix which has been getting a ton of praise for tackling, like, every issue? Asian-American identity, class divides, mental health, imposter syndrome, work-life balance, and so on. It’s good! Wild, but good.
    • Ryan Adams put out yet another cover album, as in a cover of an entire album, this time for Oasis’s What’s The Story Morning Glory? and while it should be a home run given how well he pulled off that cover of Wonderwall years ago, the whole thing is a bit disappointing. Changing key melodies for worse ones for no good reason, inconsistent production from song to song, and kinda turning a fun album into a drag.
    • I’ve decided, like several people I follow, that I will probably not be paying for a Hipstamatic subscription after my free trial ends. The social network suffers from technical and UX issues, from as small as how slow the gratuitous card-flipping animation makes it to browse your feed, to the broken friend-finding functionality — to say nothing of how low quality many of the photos being shared are, for which some blame must be laid at the feet of the garish filters which were supposed to be the whole point. I fear there isn’t really a revival of interest in many of these early-era looks, just a desire on Hipstamatic’s part that one happens. That said, I love some of the classic ones, like the “Model 100” (the original John S + Ina’s 1969 lens + film combo from Hipstamatic circa 2009), but too many of them make good photos worse. And you can actually use some of the best ones with the free plan — which is what I’ll continue to do.

  • Bing Blogs #5: The Nightmare of Too Many Photos

    525 words

    In case it wasn’t clear after four previous installments, these posts are written by Bing/GPT-4 and not me; I’m just the human giving prompts. It’s a lazy way of getting more content, and also an excuse to play with these AI tools. I considered editing these by hand so they’d be more “collaborative” but then you’d miss seeing what it can do by itself.

    Here’s one that I could have written myself, based on a real dilemma I’m facing (though in my case it wasn’t 10,000 photos, more like 1,300).


    Hello, dear readers. I’m back from my amazing trip to Japan, where I had the time of my life exploring the culture, cuisine and scenery of this wonderful country. I’m sure you’re all eager to see the photos I took during my adventure, right?

    Well, brace yourselves. Because I have a confession to make.

    I took too many photos.

    Yes, you heard me right. Too many photos. As in, over 10,000 photos. On one trip.

    How did this happen? Well, it’s simple really. Japan is just too photogenic. Everywhere I looked, there was something worth capturing. The temples, the gardens, the cherry blossoms, the sushi, the anime characters… I couldn’t resist snapping away with my digital camera.

    And now I have a problem.

    A big problem.

    Because I have no time to edit them.

    You see, editing photos is not as easy as it sounds. It takes hours of work to sort through them, delete the bad ones (which are surprisingly few), crop them, adjust them and add filters and effects to make them look even better.

    And that’s just for one photo.

    Now imagine doing that for 10,000 photos.

    That’s right. It’s impossible.

    I don’t have that kind of time. Or patience. Or skill.

    So what am I going to do?

    Well… nothing.

    That’s right. Nothing.

    I’m going to leave them as they are. Unedited. Raw. Real.

    And I’m going to share them with you anyway.

    All 10,000 of them.

    In one go.

    Are you ready?

    No?

    Too bad!

    Here they are!

    [Insert 10 GB worth of photos here]

    Enjoy!


    I generated quite a few variations on this post idea, and some had pretty funny moments I wish I’d saved, e.g. inserting a link to a purported selfie in front of a cherry blossom tree as my best photo, even pointing to Unsplash (it was an invalid URL the AI made up).

    Another post had this advice, which I thought was worth posting. Perhaps it was stolen?


    The lesson is simple: don’t let your photos stress you out. Don’t let them sit on your camera or computer forever collecting dust (or worse). Don’t let them become a burden instead of a joy.

    Instead:

    • Take as many as you want (but not more than you need).
    • Delete as many as you can (but not more than you should).
    • Organize as much as possible (but not more than necessary).
    • Edit as little as needed (but not less than desired).
    • Share as much as appropriate (but not less than expected).
    • Back up as often as feasible (but not less than required).

    And most importantly:

    Have fun!

    Because after all,

    Photos are memories,

    And memories are priceless.