Okay this one’s a bit childish but useful! Or at least well intentioned. You know how some names just lend themselves to being made fun of, and you’re like… “what were the parents thinking?” Or maybe your name is just fine, but you’re about to marry someone, or your parent is remarrying, and your last name might change into something problematic.
Name Guardian is here to solve this very niche problem. It will check your name for several “vulnerabilities”: does it sound like something rude? Does it have double meanings? Will your initials spell something unintended? Might different forms or contractions of your name have some cultural significance somewhere?
It even kinda works with Mandarin Chinese names, but it admittedly can’t do anything about dialect names. YMMV but it worked on all the English names I tested it on.
FilmNerd is your friendly movie buff for deep dives into cinema history, critiques, and all things film! 🎬
Here’s a fun chatbot for those times you want to debate a film but your friends haven’t seen it or have had enough of your bullshit. It’s up for all sorts of questions and hypothetical arguments, and I’m learning a lot just talking to it.
Example: I asked for a film where a bolt of lightning was at the center of a major plot point and it said Back to the Future (of course!), and then we asked it whether that was a more worthy moment in lightning-centric film discourse than Thor, and it was able to provide compelling arguments both ways.
This week in artificial intelligence was a big one: Humane unveiled their highly-anticipated wearable, while OpenAI made strides with ChatGPT enhancements.
The Humane Ai Pin
A lot has already been said about the letdown that the Humane reveal was, mostly by people confused by the presentation style of the two ex-Apple employees who founded the company.
If you’ve seen Apple events and Humane’s 10-minute launch video, you’ll note the contrast in delivery and positioning. Apple tries to couch features and designs in real-life use cases, and show authentic enthusiasm for what they do to improve customers’ lives (Steve was unmatched at this). Humane kicked off with all the warmth of a freezer aisle, missing the chance to sell us on why their AI Pin wasn’t just another tech trinket in an already cluttered drawer. They puzzlingly started with how there are three colors available and it’ll come with extra batteries you can swap out, before even saying what the thing does! The rules of storytelling are quite well established, and why they chose to ignore them is a mystery.
A lot was also said about how two key facts in the video presentation, provided by the AI assistant so central to their product, turned out to be inaccurate. One was about the upcoming solar eclipse in 2024 (and Humane’s logo is an eclipse! How do you get this wrong?), and the other was an estimate of how much protein a handful of nuts has. It’s a stunning lack of attention to detail that this was not fact-checked in a prerecorded video.
Personally, I have been waiting for the past five years to see what this stealth startup was going to launch, and as the rumors and leaks came out, I was extremely excited to see an alternative vision for how we interact with computers and personal technology. What they showed did not actually stray from what we knew. An intelligent computer that sees what you see, is controlled by natural language, and is able to synthesize the world’s knowledge and project it onto your hand in response to queries is amazing!
The hardware looks good, channeling the iPhone 5’s design language to my eyes, and I’ll bet they had to pioneer new ideas in miniaturization and engineering to get it down to that size. I expected it to cost as much as an iPhone, but it’s only $699 USD, which feels astoundingly low. That’s not much more than what we used to pay for a large-storage iPod.
The disappointment is in their strategy. By positioning it as a replacement for your phone rather than an accessory, they’ve reduced the total addressable market to a few curious early adopters and people trying to break a tech or screen addiction. The kind who intentionally buy featurephones in 2023. I think their anti-screen stance is interesting, but it doesn’t win over the critical mass necessary to scale and challenge norms.
The Ai Pin comes with its own phone line for messages and calls (for $24/mo), so it’s not going to be convenient to use this alongside your phone, and I would not give up my phone while this is still half-baked — I say this kindly, because even the iPhone launched half-baked in many ways. For many things that we have become accustomed to in life, there is no substitute for a high-definition Retina display capable of showing images, video, and detailed or private information when necessary.
Do I believe that Apple can one day get Siri to the level of competence that OpenAI has? I have to hope, because the Apple Watch is probably a better place for an AI assistant to live than in a magnetically attached square on my T-shirt. In any case, Humane seem to have taken a leaf out of their old employers’ playbook, and will be releasing this first version only in the US, and so whether or not I would buy one is a moot point.
OpenAI and GPTs
Speaking of OpenAI, it would seem that they’re still the team to beat when it comes to foundation models. The playing field is full of open-source alternatives now, including Kai-Fu Lee’s 01.ai and their Yi-series models, but as a do-it-all company offering dependable access to dependable AI, OpenAI seems unassailable.
They announced enhancements to their models, increasing context windows and speeds while halving prices for developers, and launched a new consumer-friendly product: customized instances of ChatGPT that work like dedicated apps, which they call “GPTs”. In effect, these are a version of Custom Instructions, which were introduced earlier this year as a way to tell ChatGPT how to behave across all chats. But sometimes you’re a researcher at work and sometimes you just want to have some dumb fun, so I’m not sure they ever caught on.
So now GPTs let you specify (pre-prompt?) different contexts and neatly turn them into separate tools for different purposes. Importantly, you can now also upload knowledge in the form of files and documents for the agents’ reference in generating replies. This makes them more powerful and app-like, and normal people like me with no coding ability can create them by telling a bot what they want (in natural language, of course), or writing prompts directly. I recommend the latter, because chatting with the “Create” front-end tends to oversimplify your instructions over time and you risk losing a lot of detail about how you want it to work and interact with users.
So what does the launch of these GPTs mean? Well, for many of the developers who were riding the OpenAI wave and only used their APIs to build simplistic wrapper apps, it’s a sudden shift in the tide and they’re now forced to build things that aren’t reducible to mere prompts.
What we’ll soon see is a GPT gold rush. Brace yourself for a stampede of AI prospectors, each hunting for their piece of OpenAI’s bonanza — the company will be curating and offering GPTs in a “Store” and sharing revenue with creators. That’s a different model than their APIs where developers pay OpenAI for compute and charge users in turn. Here, users all pay OpenAI a flat fee for ChatGPT Plus and can use community-made GPTs all they want (within the rate limits).
Hear everyone talking about a viral GPT that makes it so easy to do X? When you want to try it out, you’ll see a call-to-action to sign up for ChatGPT Plus. This signals to me that launching GPTs is a strategy to drive paid account conversion, which begins the lock-in that OpenAI needs in order to make ChatGPT the new OS for services, not unlike how WeChat is the base layer that runs China, regardless of whether you use iOS or Android. Eventually you won’t even need to know about or choose the GPTs you use; the master ChatGPT system will call them as necessary. We may not be headed for a screen-less future, but we’ll probably see an app-less one.
My GPT projects
Of course I’m playing with this and making some of my own! Did you think I wouldn’t, given the ability to create AI things without coding?
I’ve got a list of ideas to work on, and so far I’ve acted on three of them, which are explained on this blog in separate posts.
✨ PixelGenius was my first, and contains the most complex prompt I’ve ever written. It started out as a tool to generate photo editing presets/filters that you can use on your own in any sufficiently advanced photo editing app with curves, H/S/L controls, and color grading options. You can just say “I want to achieve the look of Fujifilm Astia slide film” and it’ll tell you how to do that. But now it does more than just make presets. More details and examples in the blog post here.
SleepyTales · SleepyKills
😴 SleepyTales was the second, and I’m still amazed at how good it is. It’s designed for Voice Conversations mode (currently only in the mobile app), so you can get a realistic human voice reading you original (and also interactive, if desired) bedtime stories. These are never-ending, long, and absolutely boring tales with no real point, in drama-free settings, told in a cozy and peaceful manner. It’s the storytelling equivalent of watching paint dry, yet oddly mesmerizing. More on this and the next one here.
🥱 SleepyKills 🔪 was born from a hilarious misread — I told Cien about it and ‘mundane’ became ‘murder’. So if your bedtime stories of choice are usually true crime podcasts, then you’re in luck. This GPT agent will create an infinite number of dreary murder stories, but stripped of all suspense, mystery, and excitement. They’re about as exciting as real police work, not the flashy TV investigating sort. Again, I still can’t believe how cool it is to hear these being written and read in real time.
People have said the Voice Conversations feature is a game-changer for ChatGPT, but I didn’t really get it at first when using it for general queries. IMO, the killer app for it is storytelling. I’ve been using the voice called Sky for both the above bedtime stories apps, and it works well.
Films
I watched David Fincher’s new film The Killer in bed on my iPad, just like he would want me to. Even then, it was spectacular, a cinematic victory lap for both him and Michael Fassbender. It plays with genre conventions, expectations, and riffs off his own body of work. There are some great moments and a fantastic performance by Tilda Swinton. 4.5 stars.
Speaking of performances by English actors, I also watched Guy Ritchie’s Operation Fortune: Ruse de Guerre, which is both a terrible name and a terrible attempt at creating a new globetrotting spy/special ops team franchise. But, he has a certain touch even when making shit, and the film is a hell of a lot of fun, bringing out the best in Jason Statham (who tried to hold up The Expendables 4 and failed), as well as a villainous turn from Hugh Grant that — I shit you not — is easily a Top 10 career highlight for him. Jason Statham in the right hands is a very different animal than when he’s doing B material; I don’t know how to explain it. I actually gave it 4 stars on Letterboxd and won’t take it back.
Album of the week
REM’s Up received a 25th Anniversary Edition, with some tracks seemingly remastered and a whole second “disc” of an unreleased live performance they recorded on the set of the TV show Party of Five?! Sadly it is not a track-for-track live performance of the album, which would have been great. There’s no Dolby Atmos here either, so I’m just taking this as an opportunity to revisit this album.
I can still feel the gut punch from the day Bill Berry bowed out, post-aneurysm. I was afraid they might break up, and REM was absolutely my favorite band back then (maybe still), so when Up came out, I was hopeful for a new and long-lived chapter to begin. And yeah, it was a weird album, playing with new sounds and using drum machines — not unlike The Smashing Pumpkins’ Adore album after Jimmy Chamberlin left. But many songs were great, some even recognizably REM. The band kept going for a few more albums, each a new spin on an evolving sound. And in true style, they dropped the mic at just the right moment.
😴 SleepyTales: Spins long and boring stories to help you unwind and fall asleep. Designed for voice mode, turn it on and chill…
This is a GPT designed to be used with ChatGPT’s “Voice Conversations” mode (currently only in the mobile app) — although you can use it to generate text alone, it really shines when paired with one of their realistic voices. I currently prefer the one called Sky. Like it says in the description above, this GPT agent has been prompted to provide tension-free, inconsequential, meandering stories about anything you like. It reads them out in a slow, gentle manner, for quite a while at a stretch.
So just turn on voice mode and pop your phone on the nightstand and listen to the most boring stories ever. Unfortunately, I’m unable to make it speak indefinitely without building an app, so it will occasionally stop and ask if it should keep going. You can say “yeah” or even “mmhmm”, and it will. Or you can give it some direction. Hint: just try and get it to make the story more exciting, I don’t think you’ll succeed!
And I suppose if you’ve nodded off and can’t tell it to continue, that’s a good thing? Nevertheless, I find its stories very good just for unwinding while still awake.
🥱 SleepyKills 🔪: A generative true crime podcast that couldn’t be more boring. Sleep tight!
While showing the former app to Cien, she misread “mundane” as “murder” and thought it generated boring true crime stories, to which I thought “WHY NOT!?”
And so SleepyKills was born, designed to emulate the language and style of a popular true crime podcast except… you might find it very hard to care? Firstly because the murder stories are completely generative and fictional, and secondly because they’re almost comically full of irrelevant details and lacking in any excitement or suspense. The AI podcaster often spends time on aspects of the case that no one else would want to know.
Check it out if that sounds like your kind of bedtime story.
Do you edit photos, use filters, or make your own presets? What if you had an AI tool to help create any look you asked for?
That’s ✨PixelGenius, my first “GPT” (a custom agent built on ChatGPT). It’s a photo editing expert that creates filters, suggests improvements, and helps you elevate your craft.
Describe a vibe and it’ll provide the settings to make a preset/filter.
Emulate a classic film stock!
Upload photos and get editing suggestions.
Reverse-engineer edited photos by providing a Before and After.
Learn editing techniques just by chatting naturally.
It’s designed to help beginners learn the art and color science of photo editing, while letting pros save time with great starting points. For every adjustment, it explains the intent so you learn how this stuff works.
It gives you standard adjustment values that you can plug into your favorite photo editing app like Darkroom, VSCO, Photomator, or Adobe Lightroom and save them as your own custom presets.
I prefer to learn by trying stuff out rather than watching videos or whatever, so when I first started using Lightroom, it was a messy process of trial and error that lasted years. ✨PixelGenius turns that into an interactive, guided experience. It’s like having a photo editing expert on demand, and you can even get into deep conversations about color theory and photographic history. All you need is a ChatGPT Plus account.
This involved writing one of the most comprehensive prompts I’ve done so far, so I’d be curious to know your thoughts after you give it a go!
Kim was away again for work most of the week. This effectively gave me the gift of freedom from our Beyond Deck addiction, cold turkey. So I played a little Super Mario Wonder, drank too much bourbon, watched Bitcoin YouTube, and lazed around unproductively on my iPhone.
And on the days I decided to work from the office, I was richly rewarded with elevator breakdowns, mall escalator maintenance, and other inconveniences that made me wish I’d stayed home. I don’t think I’ve mentioned my recent obsession with Luckin Coffee, the Chinese chain that’s been expanding nationwide, including an outlet in my office building. Bosses forcing a return to office, take note: getting one of their iced coconut lattes in the morning is one of the reasons I head in. I was looking forward to my fix one of these cursed mornings but found the branch closed for upgrading.
Luckin’s model is fully app-dependent and most branches are just pickup counters, with no seating. You order on your phone, pay via card or Apple Pay, and pick it up by scanning a QR code. You don’t interact with staff, and they don’t handle any filthy cash. For comparison, you can do the same thing with Starbucks’s app, as well as order in person like a boomer. To get you past the friction of downloading the app and setting up an account, your first drink at Luckin is just $0.99. Subsequently, you’ll get hit with a barrage of discount vouchers ranging from 35% to 60% off, so much so that I haven’t paid full price for a coffee yet.
For the record, their drinks are priced around $8 each, and they’re somewhere between a Starbucks tall and grande. The discounting is an aggressive acquisition play, of course, but I’ll take low-margin coffees while they last, not least because they’re actually pretty tasty! Like Starbucks Reserve, they offer a range of single-origin specialty coffees, and even give you beautiful little info cards with tasting notes. The absurdity is not lost on me; given their quick-service image, it’s like McDonald’s giving you facts on the cow in your Quarter Pounder. Nevertheless, on that day my Luckin outlet was closed, I went to a Starbucks for the first time in quite a while, and it felt like comparatively much less value.
===
From clicks to chats
I went over a new trend report my company put out, and there’s a large focus on generative AI as one might expect, which led to a couple of interesting conversations about how much work is ahead of us when it comes to overhauling the touch points we use to deal with merchants, service providers, and even governments.
In a way, the graphical interfaces we use today evolved as proxies for “natural” verbal and gestural communication. Similar to how we used mouse cursors because we couldn’t touch displays directly, we have menus and buttons and screens filled with data because we can’t directly ask computers for complex outcomes. The promise of large language models is that now we might.
There have been think pieces this week about how Apple was “caught off guard” by this gen AI wave and is now scrambling to catch up. I think they have plenty of time; here’s why:
What’s at stake isn’t smaller-scale improvements like the transformer-based autocorrect in iOS 17; it’s about whether gen AI can bring a more radical change in how we use computers. You can already see the hunger for this — the dream of J.A.R.V.I.S. — in a dozen half-baked AI-powered product announcements. We’re not far off: Humane has its wearable phone alternative, Rewind’s Pendant will process everything you say or hear, and Meta has great-looking new “smart” Ray-Bans which can put their new AI voice assistant(s) in your ear (US-only).
The basic version of conversing with AI looks like a text chat, and on the other end of the spectrum is a “multimodal” natural chat that takes a user’s body language, tone, and facial expressions into account. Putting aside the fact that such a model hasn’t been trained yet, just the massive amount of personal data this would involve means only a company positioned to put privacy first might get any traction. And then there’s the staggering hardware requirements of doing this in real time. If only someone was working on a new kind of computer equipped with industry-leading silicon, and biofeedback sensing that can even predict you’ll tap a button before you do it…
Assuming this is the right thing to do at all, the Apple Vision Pro with its microphones, retina scanners, and hand-tracking cameras should be well positioned for a future where you can simply sit down in front of an AI relationship manager from your bank, have a free-flowing discussion, and see the appropriate figures and charts pop up — instead of poking around a UI to find out how your money is doing. But the stated purpose of the Vision Pro is spatial computing, which is only a step towards natural computing.
So like every other time in history, Apple will wait while others rush in and stumble first, and hopefully clean up after with a more sensible execution. They have the time; it’s just a shame for impatient people that the hardware looks so ready. But as a wise man once said: technology moves fast, while people change slowly.
===
Pints and pop music
Ex-colleague and friend Bert is back in town for the first time in over four years, so we met up twice to catch up and see some other faces we’ve missed in recent years. This meant many pints of Guinness (if ever I associated a person with one drink only, it’s Bert and Guinness), which compounded with the aforementioned bourbon and a gut-busting, sodium-loaded visit to Coucou hotpot for a physically taxing week. Every organ is straining to detoxify and I really felt the effects all weekend.
We watched half of the third season of Only Murders in the Building and I’m happy to report it’s much better than the second. I think there’s even a self-effacing joke at one point about how the first season of the in-show podcast was more likable than the second. It comes down to a clearer story with fewer detours, the kind I’ll probably remember a year from now, unlike, say, season two’s.
Sigrid has a new four-song EP out: The Hype. It is extremely Gen-Z in that it has a shoddy photograph on the cover and looks like it was made in Canva. The music is much better, but she’s just doing more of the same, which I won’t complain about because if Coldplay just kept doing the same thing as in their early albums maybe they wouldn’t be so insufferable today.
Taylor Swift’s 1989 is finally out in (Taylor’s Version) form! I think this was the first album of hers to ditch the country pop style and just go straight pop? Did it have something to do with leaving Nashville for New York? In any case, it was the first album of hers I played for myself, and also the one Ryan Adams liked enough to cover I guess. I’ve been mostly playing both versions this weekend and came across a debate I never knew I needed: is 1989 a beach album about the Hamptons or a city album?
I used to (sporadically) log my mood and mental state in a great free app called How We Feel, but ever since iOS 17 came out with a similar feature in the Health.app, I’ve been doing it there. It’s nowhere near as good, though, and the act of recording how you feel is (surprise!) so much better in How We Feel. Apple’s version makes you scroll a list of feelings like Anxious, Content, and Sad, sorted in alphabetical order.
The other app arranges feelings in a colorful 2×2 grid, from high to low energy, from unpleasant to pleasant. An example of a high-energy unpleasant feeling is Terrified, while a low-energy pleasant feeling might be Serene. This grid is a much more logical and visual way to find the right word and quickly record your feelings throughout the day. Anyway, the rumor is that iOS 17.1 will be out next week, and I’m hoping the new Journal app is part of it, because I want better ways to record and look back on my state of mind.
===
We attended the local premiere of Martin Scorsese’s new film that everyone’s talking about online: Killers of the Flower Moon. In a theater, no less! It’s an Apple Original Film, and will be coming to Apple TV+ after this irl run is over. I can’t remember the last 3.5 hour film I saw under such circumstances, unable to take a break, forced to focus. If I’d seen it at home I’d probably have paused it no less than five times, and so I’m glad that I couldn’t, because it’s the kind of film that quietly spends its budget building a world so absolutely intact and complete that you’re left to focus on the people, the time, and the weight of its historical crimes. As a true story, it’s devastating. “People are the worst” is pretty much my 4-star Letterboxd review.
On the flip side, we saw disgraced filmmaker Woody Allen’s 2019 film, A Rainy Day in New York, which has pretty poor ratings online, and really enjoyed it. I’m aware that he has approximately, oh… one style? And a hallmark of it is neurotic, pretentious characters in awkward romantic situations who spout smart alecky jokes in an artificial, stage performance cadence… but I like it. It’s also amusing to see current generation stars like Timothée Chalamet and Elle Fanning as his stars, but playing their roles exactly like Woody. Is it because they’ve seen his old films and think they have to? Or do the scripts just demand that delivery? Also, Selena Gomez is in it, and I can’t help but see this performance as a superior version of what she does in Only Murders in the Building.
===
I got jabbed for Hepatitis A & B on Friday, and it was a doozy. I felt lightheaded and weird all afternoon afterwards, and I have to go back for two more boosters over the next few months.
Contributing to the feeling all weekend has been my new contact lenses, the first ones I’ve worn in maybe 8 years? The right eye prescription is a little underpowered and so I’m suffering with blurry images that are driving me crazy. I’ll need to try and get them exchanged next week.
Why am I wearing them at all? I got an annoying pimple/scratch behind one ear, exactly where the arm of my glasses sits, and so I decided on some disposable dailies while it heals. On one hand, the feeling of freedom is amazing — I really miss this about wearing contacts, which I did regularly in my younger days. Just things like being able to do a spontaneous facepalm! But now everyone has learnt that “my look” is “guy with glasses”, and suddenly my normal face looks weird, even to me gazing in the mirror, and I don’t need to freak people out any more than necessary.
The blurriness has had a slight impact on my enjoyment of Super Mario Wonder, the latest and greatest Mario game which just came out. I wasn’t planning to buy it, because I wasn’t planning to play it any time soon, being still in the middle of another old Mario game on the Switch, Super Mario 3D World + Bowser’s Fury. But peer pressure got to me, and talking to Jussi got me justifying it to myself that playing a 2D and 3D Mario game at the same time isn’t a problem — it’s like reading a fiction and a non-fiction book at the same time!
Super Mario Wonder is the 2D one, for the uninitiated. It’s a modern take on the classic Mario games, far more inventive and deep-reaching than even the New Super Mario Bros. series of games that tried to breathe new life into the side-scrolling platforming formula. Wonder has incredibly detailed and expressive animations all throughout: Mario and friends move and react to things like characters in a proper animated movie (was this planned to coincide with this year’s film? I don’t know), bursting with character, while the levels and events are literally psychedelic festivals of invention. This is a blockbuster game that spends its budget conspicuously, gleefully.
===
In playing with DALL•E 3 some more (within ChatGPT Plus), I discovered that it does a great job of replicating the look of classic 80s anime. You literally just have to ask it for that. I tried some classic scenes, and then asked for couples hanging out near a 7-Eleven drinking Strong Zero, and then for screenshots from a movie about a female detective investigating a case of financial fraud, and it’s that last one that made me think this thing is a new milestone in tools for visualizing stories.
There was a period about a year ago when quite a few new moms all had ideas for children’s books, and wanted to use DALL•E or Midjourney to illustrate them. I got questions about whether it was feasible to do this, and if you’ve been talking everyone’s head off about this stuff too, you probably had the same conversations.
I think this level of natural language interface with GPT-4 and DALL•E 3 coming together is finally making it possible for anyone to direct images with consistent settings and characters. I read somewhere that Midjourney v6 is going to make prompting easier as well, so perhaps we’ll get a flood of storybooks next year.
There was also a thing going around on Threads that basically asked participants to “paste your Threads bio into an AI art tool” and see what comes out. I saw a few people doing this, all floored by the accuracy of the people they saw gazing back through the black mirror, I suspect afraid of how accurately they were seen from just a few keywords — one lady said “I own all of those tops”.
I think this is a pretty strong signal for the mainstreaming of generative AI, that a meme like this can spread without instructions attached. Everyone who is online enough knows what it means to invoke an electronic genie that grants image wishes, knows very well how to go find one and get the deed done. Next year is going to be wild.
But anyway I wanted to try it out, although my bio isn’t like “Founder/CEO (he/him), hustling 24/7 🇸🇬, new book out 20/12, always up for coffee ☕️ and meetups 🤝”; it’s currently “Designer, sense-maker, aesthete, imposter, garbage, scum.” which gives you results like this:
The nation voted for a new president this weekend and the winner was Mr. Tharman Shanmugaratnam, which autocorrect changes to “That Man” (a tad disrespectful in my opinion). He got 70% of the vote which is pretty solid, but nobody’s surprised on account of how well liked and competent he is. It’s worth mentioning how painless the process was: my vote was in the ballot box less than three minutes after I showed up, and I was back home watching TV in 15.
Appropriately, we started Jury Duty on Amazon Prime Video and I think it’s gonna be great. It’s a pseudo-reality show where one man thinks he’s on the jury for an actual case but the whole thing is staged and everyone else is an actor. I’m watching this and wondering if everyone’s following a tight script or just improvising based on their characters, because there are events happening all the time whether the mark witnesses them or not.
That real-time play concept always makes me think of The Last Express, a classic but underplayed PC game by Jordan Mechner set on the Orient Express. It kicks off with a murder onboard and you have to move around the train investigating and staying alive amidst political intrigue and wartime spy stuff. Events are always happening, and if you’re not in the right place at the right time, you’ll miss crucial conversations. You can experience this for yourself on iOS but the app hasn’t been updated in five years and may be removed by Apple soon if they stick to their controversial plans.
A lot of other TV was seen. We finally finished season 3 of For All Mankind, an extremely strong show on Apple TV+. I binged all of the anime Oshi no Ko which is as great as everyone says; I don’t think I’ve ever seen a stronger (or longer) first episode. It’s a 90-minute movie in itself. I’m now midway through another highly rated anime: last year’s Lycoris Recoil, on Netflix. And on Michael’s recommendation we started on a Japanese drama, My Dear Exes, which is very enjoyable so far, maybe because it doesn’t feel like typical Japanese TV. It’s snappier and funnier somehow.
Oh, if Jordan Mechner sounded familiar earlier, it’s because he’s the man who created Karateka and Prince of Persia. And if you want to experience the making of a gaming classic, a new playable history lesson on The Making of Karateka is now out. And in a case of lovely things cosmically coming together, it was helmed by former Wired games editor Chris Kohler, who also wrote the article on Japanese curry that probably changed my life.
Staying on topic, we went down to the Japan Rail Cafe (operated by the actual JR East railway company from Japan, for some reason) in Tanjong Pagar because I’d heard they were doing a tie-up with the Kanazawa-style Japanese chain, Champion Curry, for one month only. I had my first Champion Curry back in March, after meaning to check it out for years, and while it didn’t unseat my current favorites, it was still decent by Japanese standards and incredible by Singaporean ones. They sold a small-sized plate here for S$19.90 including a drink, but it was sadly inauthentic. The curry’s consistency and deployment over the rice is not going to qualify for a Kanazawa cultural medallion any time soon, but I guess it was good enough that I’d take it any day over most local competition. But I still hope they open a proper operation locally someday and accomplish what Go Go Curry failed to do.
Champion Curry in Japan · Champion Curry in Singapore
===
I was suddenly inspired to make a new series of playlists, which will periodically capture what I’m listening to, sequenced like a proper mixtape. If I had the skills to make a DJ mix of them, I would! Here’s BLixTape #1 for my Apple Music fam.
And the tracklist for people still on *ahem* lesser services:
Gold -Mata Au Hi Made- (Taku’s Twice Upon A Time Remix) — Hikaru Utada (I said I wasn’t a fan of the regular version but this remix works!)
TGIF — XG
bad idea right? — Olivia Rodrigo
You Are Not My Friend — Tessa Violet
Dancing In The Courthouse — Dominic Fike
For Granted — Yaeji
Bittersweet Goodbye — Issey Cross
To be honest (SG Lewis Remix) — Christine and the Queens
Making this involved a detour into the world of NewJeans’ music videos, which are pretty conceptually twisted and seem to comment on the parasocial relationships fans have with them. For example, in the mostly sunny poppy video for ETA, the girls might only be hallucinations seen by a sick fan, telling her that her boyfriend is cheating on her with someone at a party. So she ends up murdering him and the girl with her car! I guess this is what it takes to stand out now.
Let’s end on a nice note with another video I came across on YouTube while checking out more electronic music. This guy Don Whiting also does a great job killing it on the road — performing a two-hour drum & bass set on a bike, accompanied by a huge entourage of other cyclists. It looks like an awesome day out, at a pace even I could probably handle.