An AI turned this week’s notes into poetry.
A Chronicle of Week Twenty-One
In a week where work did reign,
Much to tell there’s little gain,
Round it though, we gently dance,
For work’s secrets shan’t have chance.
It was one of those weeks where not an awful lot happened outside of work. I don’t talk about work here but let’s sort of circle it.
One thing I can say is that I started making a presentation deck about the use of generative AI and GPT in design, initially to share my findings so far but increasingly as an exercise in structuring my thoughts into anything at all useful.
A couple of things, on reflection: an AI assistant or collaborator poses significant risks for a lazy human in certain tasks since it tempts us to quickly accept its output without evaluating potential improvements. Assuming AI can do a job to 90% of a human’s rigor and quality, figuring out what the other 10% is without having done the same work yourself is quite the challenge. So the efficiency gains may not be as significant as you think, not until we figure out some smarter processes.
An example of what I mean: you can feed ChatGPT with notes from an interview conducted with a customer about their experiences and how a product fits into their lives. Supply enough interviews, and ChatGPT can do the work of summarizing them in aggregate, presenting key themes and areas worth looking into, like maybe everyone thinks the price is too high, but it’s because they don’t fully understand the value of what they’re buying.
It can create a bunch of frameworks to illustrate these findings, like personas and service blueprints. And it can even suggest solutions, like better marketing materials to explain this value to customers. The AI’s output might look pretty good, similar to what a team of human designers would (more slowly) produce, and a company might be tempted to make business decisions based on it. In fact, a team of human designers who haven’t read the interview notes themselves or thought deeply about them might also look at the AI’s work and say it’s good to go.
The level of confidence and finality inherent in these AI outputs is incredibly convincing. But if a human were to go through all the interviews, listening to the recordings perhaps, they might realize there was a missing element, a feeling subtly unsaid on the margins, that means some customers do see the extra quality, they just wish there was a cheaper version of the product that did less. Skimming through the finished research report crafted by the AI, you wouldn’t even begin to guess where in the sea of correct conclusions this exception could be hiding.
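If you want to try this aggregation step yourself, it’s mechanically simple: stuff the transcripts into one prompt and ask for cross-cutting themes. Here’s a minimal sketch; the function name, wording, and character budget are my own inventions, not any product’s API, and you’d pass the result to whatever chat model you use:

```python
# Hypothetical sketch: combining interview transcripts into a single
# synthesis prompt. The separator, budget, and instructions are mine.

def build_synthesis_prompt(transcripts, max_chars=12000):
    """Join interview notes into one prompt asking for aggregate
    themes, trimming each transcript so the total fits a budget."""
    budget = max_chars // max(len(transcripts), 1)
    body = "\n\n---\n\n".join(t[:budget] for t in transcripts)
    return (
        "You are a design researcher. Below are customer interview "
        "notes separated by '---'. Summarize the key themes across "
        "all interviews, list areas worth investigating further, and "
        "flag outlier opinions rather than smoothing them over.\n\n"
        + body
    )

notes = [
    "Customer A thinks the price is too high for what it does...",
    "Customer B sees the quality but wishes a cheaper, simpler "
    "version existed...",
]
prompt = build_synthesis_prompt(notes)
```

Note the explicit instruction to flag outliers: it won’t guarantee the model surfaces that subtle unsaid feeling on the margins, but leaving it out almost guarantees it won’t.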
But there’s no question that this stuff is ready today to do some tasks like image editing, seen in Photoshop’s impressive beta release of a “Generative Fill” feature this week. I took a stock photo and doubled its height, and it was able to get the context of the scene and fill in the missing ceiling almost perfectly. That would have taken an experienced image compositor at least a few minutes, and anyone else way too much time. Just a couple of clicks now.


I also looked into what Adobe is building with its Sensei suite of AI marketing tools, and that dream of generating and sending personalized ads, as in a unique package of art and copy tailored to a single customer’s behavior, would seem to be already here. I’m not 100% sure how this works everywhere, but in the past, you’d still need copywriters and art people involved in the process after marketers had identified the “customer journeys” and content-goes-here templates. With the opportunities now being identified, advertising messages crafted, and email offers all sent with a single click by the same person, there’s hardly a crack in the door left for the traditional artists and copywriters to make their case. Yet, the quality is good enough to satisfy all but the most discerning of corporations.
You may observe that two of the largest advertising platforms are already in this space.
What do you think about the current advancements in AI and their implications? Share your thoughts in the comments below.
(One more example: I asked ChatGPT to help suggest edits for this post, and it rewrote one of the above sentences to be better. I liked it, but on closer inspection, there was a glaring semantic error I had to fix myself. It also suggested the call to action above, to increase engagement. Talk to me!)
===
There seems to be yet another wave of Covid sweeping through the city, based on the fact that several people I know have come down with it, and every bus and train car I’ve been on this week had more people wearing masks, suggesting that they, too, know people who’ve come down with it.
Kim is going away for a couple of weeks, and I’m hoping she doesn’t run into it out there either; one of her colleagues caught it while traveling in the region a few days ago. I’m planning to stay home as much as I can during this time, and finishing as many video games as possible.
===
I asked ChatGPT to analyze my post and it suggested I add a conclusion. I said “you do it, then”. Here’s what we agreed on:
This week has been a whirlwind of diverse experiences and discoveries. From immersing myself in the captivating sounds of Random Access Memories in Dolby Atmos to exploring the world of Korean soft power through Jinny’s Kitchen, I’ve been on a journey of curiosity and appreciation.
The Anbernic RG35XX has provided me with nostalgic gaming moments, while progressing in Breath of the Wild has been both thrilling and daunting. Even a simple nail painting team event at work has reminded me of the freedom to embrace self-expression.
These experiences have sparked my curiosity and inspired me to share my thoughts as a writer. Life is full of opportunities to explore, connect, and find inspiration in unexpected places. So, whether it’s through music, cuisine, gaming, or the little joys, embrace the unknown and discover the stories and experiences that await.
Thank you for joining me on this adventure, and until next time!
I was in a cab listening to music on my AirPods, and just as we were pulling up, I switched to Transparency Mode and heard a song playing over the car’s radio that sounded kinda familiar. I knew it was a remix of some tune I wanted to know, and managed to Shazam it before getting out.
Looking into it later, I realized the melody was what I’d been trying to figure out about Charli XCX’s White Mercedes for over a year. Why does that one line she sings — literally the line “like a white Mercedes” — sound like some other song I can’t name? It turns out, it’s literally a song I would have absorbed from the world around me but never intentionally listened to: One Direction’s Night Changes from 2014. Ahhh it’s so good to have that itch scratched! And there are so many more like this I’ve yet to solve.
Let me say it again for the search engines: Charli XCX’s White Mercedes sounds like, samples, or contains an interpolation from One Direction’s Night Changes.
Another similar thing happened as I was playing WarioWare Inc. (GBA, 2003) again for the first time in years. The background music in one stage awoke some long dormant memory and I needed to know what pop song from my younger days it sounded like. After a lot of humming aloud and trying to Shazam it and searching online… I concluded that the song it reminded me of was… itself. It’s called Drifting Away, and I must have really loved it back when I was playing the game for the first time.
Speaking of retro games, I lasted a full week. The Anbernic RG35XX I said I wouldn’t buy since I already have a Retroid Pocket Flip is now on the way to me from China. There are some reports of shoddy QA and long-term durability, but for S$90 I think that’s to be expected.
===
Another week, another bunch of water-cooler conversations about AI. Specifically how it relates to our work in design: as accelerator, collaborator, ambiguous combatant, amoral replacement. I don’t just mean the making pictures and writing words part, but analyzing messy human interactions (it’s just unstructured data) and presenting them in new ways.
I ran one experiment with ChatGPT on Sunday afternoon, just for kicks, and it sort of blew my mind. From a handful of behavioral traits and demographic details I supplied, it was able to inhabit a fictional personality that I could speak to and pitch various products to. So far so par for the course. But then it reacted to a hypothetical KFC offering called “The Colonel’s Colossal Combo” in a way I didn’t expect, citing a conflict with values and dietary preferences that I did not specify. When asked where they came from, it argued that although they were not specified, they could be reasonably expected from the “Frank” persona I’d created, because of some other background that I DID provide. It sounded a lot like intelligent reasoning to me, and regardless of how it works, I was happy to accept the inference the same as if a colleague were making it.
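For anyone wanting to reproduce the experiment, the setup is just a persona description turned into an instruction. A rough sketch of how I’d structure it; the field names and wording here are mine, not anything ChatGPT prescribes:

```python
# Hypothetical sketch of the persona experiment: a handful of traits
# and demographics become a role-play instruction for a chat model.

def persona_prompt(name, demographics, traits):
    """Build a system-style instruction asking the model to inhabit
    a fictional customer and react to pitches in character."""
    lines = [f"You are role-playing '{name}', a fictional customer."]
    lines.append("Demographics: " + "; ".join(
        f"{k}: {v}" for k, v in demographics.items()))
    lines.append("Behavioral traits: " + "; ".join(traits))
    lines.append(
        "Stay in character. React to product pitches as this person "
        "would, and infer plausible values and preferences that are "
        "consistent with the background above."
    )
    return "\n".join(lines)

frank = persona_prompt(
    "Frank",
    {"age": 52, "household": "suburban family"},
    ["health-conscious", "budget-aware", "skeptical of advertising"],
)
```

That last instruction, to infer what’s consistent with the background, is exactly the behavior that surprised me with the KFC pitch: the model filled in values I never specified.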
Like with all advances in automation, it’s inevitable that we’ll now be able to (have to) do more in less time, with fewer people. Until things go “too far” and need to be reined in, it’s not even a question of whether we should — every industry is incentivized to discover what can be done before it gets done to them. I think there are some exciting opportunities for designers, and a slew of unknown consequences for society. And just like that, we’re back in a new “fuck around” phase of the tech cycle.
===
A couple of weeks ago I made a bunch of fashion-style athleisure photos with Midjourney v5 but somehow forgot to post them. The photorealistic ones are quite incredible, and the few illustrations I got were really strong too.






This week, v5.1 dropped, promising more opinionated outputs and sharper details, so I tried the same prompt again. Many of the results were as broken as these bodies.



They probably fixed something quietly because it’s been more reliable in the days since. I thought it would be interesting to do a comparison of models 1 through 5.1 with the same prompt. It’s crazy how far it’s come in just over a year.
photograph of Queen Elizabeth II in a dim video arcade, sitting at a street fighter 2 arcade cabinet, intense concentration playing game, side view, screen glow reflected on her face, atmospheric dramatic lighting --ar 3:2








If you saw Midjourney a year ago, you were probably impressed by how it and Dall-E 2 could turn quite natural text descriptions into imagery, even if the results were still quite hallucinatory, like DeepDream’s outputs circa 2015. I don’t think you would have expected to see the pace of improvement be this quick.
It’s not just rendering improvements from distorted pastiches to photorealistic scenes with internal logic (global light affecting nearby objects realistically, fabrics folding, leather seat covers stretching under buttocks), but also how it’s evolved through feedback and training to understand intent: the idea of a “side view” started working from v4. None of the earlier re-generations got me the camera angle I was going for. The tools that promise to do this for video are probably going to get good faster than you expect.








I usually look through my camera roll to recall events as I start writing these posts. It’s telling me nothing much happened this week.
That’s not true; it’s just a lot of it was spent online. You might have noticed the excitement and fast pace of advancements in AI recently, and it seems I’m spending a correspondingly larger amount of time playing with, reading about, and discussing the impact of it on our work and lives. It’s enough to make one consider taking a gap quarter or year off work to focus on this stuff.
One catalyst was a colleague being invited to do an interview on what it means for design, and so we had a conversation about the trends beforehand. Unsurprisingly, the media is still thinking about both design and AI simplistically: will image generation mean fewer jobs for illustrators and that sort of thing. I find it hard to be optimistic in the short term, in that AI is lighting a fire under our asses and it’s going to cause a lot of pain. But the potential for us as a discipline to evolve under pressure into something greater is undeniable.
It didn’t help that the next thing I saw was The AI Dilemma, a talk by the creators of the documentary, The Social Dilemma, wherein they say the problems unleashed on society by social media were just the prequel to what AI is on track to do if we don’t prepare. And let’s just admit we don’t have a great track record of preparing for things we know are going to hit us later. It’s about an hour long but I’d file it under essential viewing just for awareness of what’s building up.
The above talk was given at The Center for Humane Technology, and coincidentally this was the week we finally got a look at what Humane, the secretive product company founded by a load of ex-Apple designers and engineers, has been building and teasing.
I’ve been anticipating their debut for a long time and had a pretty good idea of the core concept from their leaked pitch deck and patents. Essentially, the device achieves AR by projecting a digital interface on the world around you the old-fashioned way, using rays of light pointed outwards, rather than on the inside of glasses. At some point along the way they started mentioning AI a lot, and it looks like the secret ingredient that turns a nothing-new wearable camera + laser projector into a real alternative to smartphones. In other words, an intelligent assistant that isn’t primarily screen based, so we can be less distracted from “real life”.
It’s probably best to withhold judgment until we see more at some sort of unveiling event, with more demos, a name, a price, a positioning. But it’s worth remembering that when the iPhone came out, it was a phone good enough to replace whatever you were using at the time. Humane’s device is said to be standalone and not an accessory to be paired with a smartphone. It’s also shown taking calls. The bar for replacing your telephone is now much higher after some 16 years of iPhones.
An intelligent assistant that let you do things quicker with less fiddling was always my hope for the Apple Watch from its very first version; that Siri would be the heart of the experience, and the UI wouldn’t be a mess of tiny app icons and widgets, but a flexible and dynamic stream of intelligently surfaced info and prompts. We all know Siri (as a catch-all brand/name for Apple AI) wasn’t up to the task at the time, but I keep hoping the day is right around the corner. Fingers crossed for the rumored watchOS revamp at WWDC this year.
There’s now also a rumor that iOS 17 will add a new journaling app, and my expectations are already very high. They say it’ll be private, but tap into on-device data like Health and your contacts and calendars. That goes beyond what Day One does. I’m imagining the ultimate lifelogging app that automatically records where you go, who you met, what you did, how tired you were, what music you were listening to, and your personal reflections, all in one searchable place. I’ve tried a bunch of these before, like Moves and Momento, but nothing lasted. If Apple does do this, I may finally be able to ditch Foursquare/Swarm, which I still reluctantly use to have a record of where I’ve been. Its social network aspect is nice but not essential since hardly anyone else uses it now.
I remember there was a Twitter-like app called Jaiku on Nokia smartphones over 15 years ago that had a feature where, using Bluetooth, it could tell if you met up with a fellow user, and post to your other friends about it. I was excited by it but had few friends and even fewer ones on Jaiku. Just like with AirTags and Find My, tapping into Apple’s giant user base could finally make this concept viable. As long as Apple isn’t trying to do a social network again.
===
Oh right, back to AI. What have I been doing? Some of it was playing games with ChatGPT, essentially asking it to be a dungeon master using the following superprompt (which I did not create btw!):
I want you to act like you are simulating a Multi-User Dungeon (MUD). Subsequent commands should be interpreted as being sent to the MUD. The MUD should allow me to navigate the world, interact with the world, observe the world, and interact with both NPCs and (simulated) player characters. I should be able to pick up objects, use objects, carry an inventory, and also say arbitrary things to any other players. You should simulate the occasional player character coming through, as though this was a person connected online. There should be a goal and a purpose to the MUD. The storyline of the MUD should be affected by my actions but can also progress on its own in between commands. I can also type “.” if I just want the simulated MUD to progress further without any actions. The MUD should offer a list of commands that can be viewed via ‘help’. Before we begin, please just acknowledge you understand the request and then I will send one more message describing the environment for the MUD (the context, plot, character I am playing, etc.) After that, please respond by simulating the spawn-in event in the MUD for the player.
Try it! I even had success asking it (in a separate chat) to come up with novel scenarios for a SF text adventure game, which I then fed back into this prompt. I can’t emphasize enough how fun this is: you can take virtually any interesting, dramatic scenario and immediately play it out as an interactive story.
Here’s an example where I played the role of a time traveler who has to stop a future AI from destroying humanity by going back in time to prevent the invention of certain things, starting with the Great Pyramid of Giza, which will purportedly become a power source for the AI.




And here are a couple of new products made possible by GPT. There are so many, all asking for about $10/mo. Most won’t survive as this stuff becomes commoditized, but for the moment they are all amazing because these things weren’t possible before.
===
It was also my birthday, and I saw John Wick 4 and ate a lot of Taiwanese hot pot. Also binged all of the new Netflix show, The Diplomat, and it was actually good. Life’s alright when that happens.
Ugh, the post-holiday period is the worst. I’ve struggled through the week, and it was only a short four-day work week because of the Easter/Good Friday holiday. I’m in the mood for another break now, and thankfully we have a week in Australia later this year to look forward to.
I started off Monday with a client video call in which I got frustrated enough by my bad lighting situation (sitting in front of blinds — either too much light, too little, or visible horizontal shadows across my face) to finally do something about it. During my aimless ambles down the aisles of Japan’s electronic superstores, I saw many shelves dedicated to remote work equipment, presumably a big sales driver for them over Covid-19, and considered bringing a ring light home. I didn’t, but I found good looking ones on Shopee and ended up with a rectangular soft LED panel on a tabletop stand for just S$27! It does five color temperatures, but I’m sticking with Daylight, and overall it’s been an awesome purchase I should have made ages ago. And it arrived in 24 hours.


No surprises, but I’ve taken far fewer photos since returning. I still open Hipstamatic regularly just to keep my streak going, and it’s forced me to try and snap something every day. That said, I wonder if this habit, and the product’s reboot, will last. As I was discussing with Michael, they needed to put some momentum behind the launch and sustain it with updates and quality posts in the global pool. But from how it looked in their updates, the founders were (also) on holiday in Japan on launch week? Perhaps they were there to boost some community events, but I looked at the Japan-only photo feed regularly and I was one of the most prolific posters. Not a great sign. They just released an update this weekend, at least, with a new Uji-inspired lens and film.
A new fun thing to do with Midjourney emerged this week: a /describe command which takes a photo you upload and has the system describe it back to you in the form of Midjourney prompts, which you can then submit to generate a “broken telephone” remix of your original image.


If you think computer vision/image recognition has gotten scarily good recently, you’d be right. AI is part of this chain somewhere, and look no further than this Memecam web app which blew my mind last night. Snap a photo of something, and it recognizes what the image contains, and uses GPT to create a joke and final meme, Impact font and all. It actually writes jokes about anything, instantly. That AI-generated Seinfeld stream could technically become good, viable (if not wholly original) comedy in the near future.
===
Hey, two quick moments of consumer ecstasy I need to share!

Last week I mentioned buying black tees from FamilyMart, and then got into a few brief discussions about fashion/luxury apparel this week, wherein I reflected that while I’m happy to pay high prices for technology and things crafted out of metal, I can’t feel that way about fabrics and leather. They wear down, so why not just embrace their replacement and buy cost-effective, expendable products from basic brands? Then the Twitter algorithm put a bit of trivia in front of me that the plain white tees worn by Carmy in The Bear got attention from viewers who wanted to buy them, and that they were actually pretty expensive ones made by Japanese brand, Whitesville.
So… if you know me, you may know where this is going. Yup, this is the guy who loved PCs, hated Macs, and now has a house full of Apple products. To be clear, I wasn’t suddenly curious about the idea of buying ostentatious Veblen t-shirts with designer logos, just… better ones that would hold up longer and not look as cheap. So I now have an order of basic black tees coming in from Mr Porter that cost 5x what I normally pay for them. Gulp. I’ll work out if this actually makes sense and let you know.
===
The Super Mario Bros. Movie is a fun trip in IMAX. We enjoyed it, and I’m looking forward to finally playing Super Mario 3D World + Bowser’s Fury and New Super Mario Bros. U Deluxe (I have to look up these names every single time) on my Switch soon.