FilmNerd is your friendly movie buff for deep dives into cinema history, critiques, and all things film! 🎬
Here’s a fun chatbot for those times you want to debate a film but your friends haven’t seen it or have had enough of your bullshit. It’s up for all sorts of questions and hypothetical arguments, and I’m learning a lot just talking to it.
Example: I asked for a film where a bolt of lightning was at the center of a major plot point and it said Back to the Future (of course!), and then I asked it whether that was a more worthy moment in lightning-centric film discourse than Thor, and it was able to provide compelling arguments both ways.
This week in artificial intelligence was a big one: Humane unveiled their highly-anticipated wearable, while OpenAI made strides with ChatGPT enhancements.
The Humane Ai Pin
A lot has already been said about the letdown that the Humane reveal was, mostly by people confused by the presentation style of the two ex-Apple employees who founded the company.
If you’ve seen Apple events and Humane’s 10-minute launch video, you’ll note the contrast in delivery and positioning. Apple tries to couch features and designs in real-life use cases, and show authentic enthusiasm for what they do to improve customers’ lives (Steve was unmatched at this). Humane kicked off with all the warmth of a freezer aisle, missing the chance to sell us on why their AI Pin wasn’t just another tech trinket in an already cluttered drawer. They puzzlingly started with how there are three colors available and it’ll come with extra batteries you can swap out, before even saying what the thing does! The rules of storytelling are quite well established, and why they chose to ignore them is a mystery.
A lot was also said about how two key facts in the video presentation, provided by the AI assistant so central to their product, turned out to be inaccurate. One was about the upcoming solar eclipse in 2024 (and Humane’s logo is an eclipse! How do you get this wrong?), and the other was an estimate of how much protein a handful of nuts has. It’s a stunning lack of attention to detail that this was not fact-checked in a prerecorded video.
Personally, I have been waiting for the past five years to see what this stealth startup was going to launch, and as the rumors and leaks came out, I was extremely excited to see an alternative vision for how we interact with computers and personal technology. What they showed did not actually stray from what we knew. An intelligent computer that sees what you see, is controlled by natural language, and is able to synthesize the world’s knowledge and project it onto your hand in response to queries is amazing!
The hardware looks good, channeling the iPhone 5’s design language to my eyes, and I’ll bet they had to pioneer new ideas in miniaturization and engineering to get it down to that size. I expected it to cost as much as an iPhone, but it’s only $699 USD, which feels astoundingly low. That’s not much more than what we used to pay for a large-storage iPod.
The disappointment is in their strategy. By positioning it as a replacement for your phone rather than an accessory, they’ve reduced the total addressable market to a few curious early adopters and people who want to curb a tech or screen addiction. The kind who intentionally buy feature phones in 2023. I think their anti-screen stance is interesting, but it doesn’t win over the critical mass necessary to scale and challenge norms.
The Ai Pin comes with its own phone line for messages and calls (for $24/mo), so it’s not going to be convenient to use this alongside your phone, and I would not give up my phone while this is still half-baked — I say this kindly, because even the iPhone launched half-baked in many ways. For many things that we have become accustomed to in life, there is no substitute for a high-definition Retina display capable of showing images, video, and detailed or private information when necessary.
Do I believe that Apple can one day get Siri to the level of competence that OpenAI has? I have to hope, because the Apple Watch is probably a better place for an AI assistant to live than in a magnetically attached square on my T-shirt. In any case, Humane seem to have taken a leaf out of their old employers’ playbook, and will be releasing this first version only in the US, and so whether or not I would buy one is a moot point.
OpenAI and GPTs
Speaking of OpenAI, it would seem that they’re still the team to beat when it comes to foundation models. The playing field is full of open-source alternatives now, including Kai-Fu Lee’s 01.ai and their Yi-series models, but as a do-it-all company offering reliable access to dependable AI, OpenAI seems unassailable.
They announced enhancements to their models, increasing context windows and speeds while halving prices for developers, and launched a new consumer-friendly product: customized instances of ChatGPT that work like dedicated apps, which they call “GPTs”. In effect, these are a version of Custom Instructions, which were introduced earlier this year as a way to tell ChatGPT how to behave across all chats. But sometimes you’re a researcher at work and sometimes you want to have some dumb fun, which is probably why I’m not sure they ever caught on.
So now GPTs let you specify (pre-prompt?) different contexts and neatly turn them into separate tools for different purposes. Importantly, you can now also upload knowledge in the form of files and documents for the agents’ reference in generating replies. This makes them more powerful and app-like, and normal people like me with no coding ability can create them by telling a bot what they want (in natural language, of course), or writing prompts directly. I recommend the latter, because chatting with the “Create” front-end tends to oversimplify your instructions over time and you risk losing a lot of detail about how you want it to work and interact with users.
So what does the launch of these GPTs mean? Well, for many of the developers who were riding the OpenAI wave and only used their APIs to build simplistic wrapper apps, it’s a sudden shift in the tide and they’re now forced to build things that aren’t reducible to mere prompts.
What we’ll soon see is a GPT gold rush. Brace yourself for a stampede of AI prospectors, each hunting for their piece of OpenAI’s bonanza — the company will be curating and offering GPTs in a “Store” and sharing revenue with creators. That’s a different model than their APIs where developers pay OpenAI for compute and charge users in turn. Here, users all pay OpenAI a flat fee for ChatGPT Plus and can use community-made GPTs all they want (within the rate limits).
Hear everyone talking about a viral GPT that makes it so easy to do X? When you want to try it out, you’ll see a call-to-action to sign up for ChatGPT Plus. This signals to me that launching GPTs is a strategy to drive paid account conversion, which begins the lock-in that OpenAI needs in order to make ChatGPT the new OS for services, not unlike how WeChat is the base layer that runs China, regardless of whether you use iOS or Android. Eventually you won’t even need to know about or choose the GPTs you use; the master ChatGPT system will call them as necessary. We may not be headed for a screen-less future, but we’ll probably see an app-less one.
My GPT projects
Of course I’m playing with this and making some of my own! Did you think I wouldn’t, given the ability to create AI things without coding?
I’ve got a list of ideas to work on, and so far I’ve acted on three of them, which are explained on this blog in separate posts.
✨ PixelGenius was my first, and contains the most complex prompt I’ve ever written. It started out as a tool to generate photo editing presets/filters that you can use on your own in any sufficiently advanced photo editing app with curves, H/S/L controls, and color grading options. You can just say “I want to achieve the look of Fujifilm Astia slide film” and it’ll tell you how to do that. But now it does more than just make presets; more details and examples are in the blog post here.
😴 SleepyTales was the second, and I’m still amazed at how good it is. It’s designed for Voice Conversations mode (currently only in the mobile app), so you can get a realistic human voice reading you original (and also interactive, if desired) bedtime stories. These are never-ending, long, and absolutely boring tales with no real point, in drama-free settings, told in a cozy and peaceful manner. It’s the storytelling equivalent of watching paint dry, yet oddly mesmerizing. More on this and the next one here.
🥱 SleepyKills 🔪 was born from a hilarious misread — I told Cien about it and ‘mundane’ became ‘murder’. So if your bedtime stories of choice are usually true crime podcasts, then you’re in luck. This GPT agent will create an infinite number of dreary murder stories, but stripped of all suspense, mystery, and excitement. They’re about as exciting as real police work, not the flashy TV investigating sort. Again, I still can’t believe how cool it is to hear these being written and read in real time.
People have said the Voice Conversations feature is a game-changer for ChatGPT, but I didn’t really get it at first when using it for general queries. IMO, the killer app for it is storytelling. I’ve been using the voice called Sky for both the above bedtime stories apps, and it works well.
Films
I watched David Fincher’s new film The Killer in bed on my iPad, just like he would want me to. Even then, it was spectacular, a cinematic victory lap for both him and Michael Fassbender. It plays with genre conventions, expectations, and riffs off his own body of work. There are some great moments and a fantastic performance by Tilda Swinton. 4.5 stars.
Speaking of performances by English actors, I also watched Guy Ritchie’s Operation Fortune: Ruse de Guerre, which is both a terrible name and a terrible attempt at creating a new globetrotting spy/special ops team franchise. But, he has a certain touch even when making shit, and the film is a hell of a lot of fun, bringing out the best in Jason Statham (who tried to hold up The Expendables 4 and failed), as well as a villainous turn from Hugh Grant that — I shit you not — is easily a Top 10 career highlight for him. Jason Statham in the right hands is a very different animal than when he’s doing B material; I don’t know how to explain it. I actually gave it 4 stars on Letterboxd and won’t take it back.
Album of the week
R.E.M.’s Up received a 25th Anniversary Edition, with some tracks seemingly remastered and a whole second “disc” of an unreleased live performance they recorded on the set of the TV show Party of Five?! Sadly it is not a track-for-track live performance of the album, which would have been great. There’s no Dolby Atmos here either, so I’m just taking this as an opportunity to revisit this album.
I can still feel the gut punch from the day Bill Berry bowed out, post-aneurysm. I was afraid they might break up, and R.E.M. was absolutely my favorite band back then (maybe still), so when Up came out, I was hopeful for a new and long-lived chapter to begin. And yeah, it was a weird album, playing with new sounds and using drum machines — not unlike The Smashing Pumpkins’ Adore album after Jimmy Chamberlin left. But many songs were great, some even recognizably R.E.M. The band kept going for a few more albums, each a new spin on an evolving sound. And in true style, they dropped the mic at just the right moment.
😴 SleepyTales: Spins long and boring stories to help you unwind and fall asleep. Designed for voice mode, turn it on and chill…
This is a GPT designed to be used with ChatGPT’s “Voice Conversations” mode (currently only in the mobile app) — although you can use it to generate text alone, it really shines when paired with one of their realistic voices. I currently prefer the one called Sky. Like it says in the description above, this GPT agent has been prompted to provide tension-free, inconsequential, meandering stories about anything you like. It reads them out in a slow, gentle manner, for quite a while at a stretch.
So just turn on voice mode and pop your phone on the nightstand and listen to the most boring stories ever. Unfortunately, I’m unable to make it speak indefinitely without building an app, so it will occasionally stop and ask if it should keep going. You can say “yeah” or even “mmhmm”, and it will. Or you can give it some direction. Hint: just try and get it to make the story more exciting, I don’t think you’ll succeed!
And I suppose if you’ve nodded off and can’t tell it to continue, that’s a good thing? Nevertheless, I find its stories very good just for unwinding while still awake.
🥱 SleepyKills 🔪: A generative true crime podcast that couldn’t be more boring. Sleep tight!
While showing the former app to Cien, she misread “mundane” as “murder” and thought it generated boring true crime stories, to which I thought “WHY NOT!?”
And so SleepyKills was born, designed to emulate the language and style of a popular true crime podcast except… you might find it very hard to care? Firstly because the murder stories are completely generative and fictional, and secondly because they’re almost comically full of irrelevant details and lacking in any excitement or suspense. The AI podcaster often spends time on aspects of the case that no one else would want to know.
Check it out if that sounds like your kind of bedtime story.
Do you edit photos, use filters, or make your own presets? What if you had an AI tool to help create any look you asked for?
That’s ✨PixelGenius, my first “GPT” (a custom agent built on ChatGPT). It’s a photo editing expert that creates filters, suggests improvements, and helps you elevate your craft.
Describe a vibe and it’ll provide the settings to make a preset/filter.
Emulate a classic film stock!
Upload photos and get editing suggestions.
Reverse-engineer edited photos by providing a Before and After.
Learn editing techniques just by chatting naturally.
It’s designed to help beginners learn the art and color science of photo editing, while letting pros save time with great starting points. For every adjustment, it explains the intent so you learn how this stuff works.
It gives you standard adjustment values that you can plug into your favorite photo editing app like Darkroom, VSCO, Photomator, or Adobe Lightroom and save them as your own custom presets.
I prefer to learn by trying stuff out rather than watching videos or whatever, so when I first started using Lightroom, it was a messy process of trial and error that lasted years. ✨PixelGenius turns that into an interactive, guided experience. It’s like having a photo editing expert on demand, and you can even get into deep conversations about color theory and photographic history. All you need is a ChatGPT Plus account.
This involved writing one of the most comprehensive prompts I’ve done so far, so I’d be curious to know your thoughts after you give it a go!
Kim was away again for work most of the week. This effectively gave me the gift of freedom from our Beyond Deck addiction, cold turkey. So I played a little Super Mario Wonder, drank too much bourbon, watched Bitcoin YouTube, and lazed around unproductively on my iPhone.
And on the days I decided to work from the office, I was richly rewarded with elevator breakdowns, mall escalator maintenance, and other inconveniences that made me wish I’d stayed home. I don’t think I’ve mentioned my recent obsession with Luckin Coffee, the Chinese chain that’s been expanding nationwide, including an outlet in my office building. Bosses forcing a return to office, take note: getting one of their iced coconut lattes in the morning is one of the reasons I head in. I was looking forward to my fix on one of these cursed mornings but found the branch closed for upgrading.
Luckin’s model is fully app-dependent and most branches are just pickup counters, with no seating. You order on your phone, pay via card or Apple Pay, and pick it up by scanning a QR code. You don’t interact with staff, and they don’t handle any filthy cash. For comparison, you can do the same thing with Starbucks’s app, as well as order in person like a boomer. To get you past the friction of downloading the app and setting up an account, your first drink at Luckin is just $0.99. Subsequently, you’ll get hit with a barrage of discount vouchers ranging from 35% to 60% off, so much so that I haven’t paid full price for a coffee yet.
For the record, their drinks are priced around $8 each, and they’re somewhere between a Starbucks tall and grande. The discounting is an aggressive acquisition play, of course, but I’ll take low-margin coffees while they last, not least because they’re actually pretty tasty! Like Starbucks Reserve, they offer a range of single-origin specialty coffees, and even give you beautiful little info cards with tasting notes. The absurdity is not lost on me: given their quick-service image, it’s as if McDonald’s gave you facts about the cow in your Quarter Pounder. Nevertheless, on the day my Luckin outlet was closed, I went to a Starbucks for the first time in quite a while, and it felt like much less value by comparison.
===
From clicks to chats
I went over a new trend report my company put out, and there’s a large focus on generative AI as one might expect, which led to a couple of interesting conversations about how much work is ahead of us when it comes to overhauling the touch points we use to deal with merchants, service providers, and even governments.
In a way, the graphical interfaces we use today evolved as proxies for “natural” verbal and gestural communication. Similar to how we used mouse cursors because we couldn’t touch displays directly, we have menus and buttons and screens filled with data because we can’t directly ask computers for complex outcomes. The promise of large language models is that now we might.
There have been think pieces this week about how Apple was “caught off guard” by this gen AI wave and is now scrambling to catch up. I think they have plenty of time; here’s why:
What’s at stake isn’t smaller-scale improvements like the transformer-based autocorrect in iOS 17; it’s about whether gen AI can bring a more radical change in how we use computers. You can already see the hunger for this — the dream of J.A.R.V.I.S. — in a dozen half-baked AI-powered product announcements: Humane’s wearable phone alternative, Rewind’s Pendant that will process everything you say or hear, and Meta’s great-looking new “smart” Ray-Bans that can put their new AI voice assistant(s) in your ear (US-only).
The basic version of conversing with AI looks like a text chat, and on the other end of the spectrum is a “multimodal” natural chat that takes a user’s body language, tone, and facial expressions into account. Putting aside the fact that such a model hasn’t been trained yet, just the massive amount of personal data this would involve means only a company positioned to put privacy first might get any traction. And then there are the staggering hardware requirements of doing this in real time. If only someone were working on a new kind of computer equipped with industry-leading silicon, and biofeedback sensing that can even predict you’ll tap a button before you do it…
Assuming this is the right thing to do at all, the Apple Vision Pro with its microphones, retina scanners, and hand-tracking cameras should be well positioned for a future where you can simply sit down in front of an AI relationship manager from your bank, have a free-flowing discussion, and see the appropriate figures and charts pop up — instead of poking around a UI to find out how your money is doing. But the stated purpose of the Vision Pro is spatial computing, which is only a step towards natural computing.
So like every other time in history, Apple will wait while others jump the shark first, and hopefully clean up after with a more sensible execution. They have the time; it’s just a shame for impatient people that the hardware looks so ready. But as a wise man once said: technology moves fast, while people change slowly.
===
Pints and pop music
Ex-colleague and friend Bert is back in town for the first time in over four years, so we met up twice to catch up and see some other faces we’ve missed in recent years. This meant many pints of Guinness (if ever I associated a person with one drink only, it’s Bert and Guinness), which, compounded with the aforementioned bourbon and a gut-busting, sodium-loaded visit to Coucou hotpot, made for a physically taxing week. Every organ is straining to detoxify and I really felt the effects all weekend.
We watched half of the third season of Only Murders in the Building and I’m happy to report it’s much better than the second. I think there’s even a self-effacing joke at one point about how the first season of the in-show podcast was more likable than the second. It comes down to a clearer story with fewer detours, the kind I’ll probably remember a year from now unlike, say, season two’s.
Sigrid has a new four-song EP out: The Hype. It is extremely Gen-Z in that it has a shoddy photograph on the cover and looks like it was made in Canva. The music is much better, but she’s just doing more of the same, which I won’t complain about because if Coldplay just kept doing the same thing as in their early albums maybe they wouldn’t be so insufferable today.
Taylor Swift’s 1989 is finally out in (Taylor’s Version) form! I think this was the first album of hers to ditch the country pop style and just go straight pop? Did it have something to do with leaving Nashville for New York? In any case, it was the first album of hers I played for myself, and also the one Ryan Adams liked enough to cover I guess. I’ve been mostly playing both versions this weekend and came across a debate I never knew I needed: is 1989 a beach album about the Hamptons or a city album?
Our fridge is dying. After some eight years of dutifully cooling and freezing our food reserves, it’s losing its mind. Like a soldier left to survive too long in the jungle, it can’t tell right from wrong anymore, and it’s probably a threat to someone’s life. It started midweek when I decided to get some ice-cream and found the unopened tub mushy and soft to the touch. Ditto blocks of frozen salmon — uh oh, not a good sign.
I’ve realized in recent years that I get disproportionately upset when things go wrong in the household. They’re like waves rattling loose the stones in my psychological seawall; things at home simply need to be predictable, dependable, safe. Maybe it’s the result of some trauma. Maybe the outside world is just too much sometimes.
A new fridge has been viewed and paid for; it will be roused from its Korean factory-induced slumber this Monday and loaded up with every surviving vegetable and condiment. I get images of them as war refugees lining up to get on a boat. They’re the tough ones, made of more shelf-stable stuff. Pour one out for their fallen brothers: the spoils of war.
Do you know what new fridges cost these days? I certainly did not. I’m pretty sure our last one was under S$1,000, but they cost more now. Blame inflation, the chip shortage, whatever, but the ones under a grand now are the brands that probably don’t come to mind when you think refrigerators: Whirlpool, Electrolux, Sharp, and local OEM brands you wouldn’t think of at all. So now we’ll have our very first Samsung product, if you don’t count the displays and components they make for others.
Coupled with the so-called seasonal downturn in the markets now underway (supposedly the August and September months before a US election year tend to see significant corrections), there have been quite a few conversations about everyone feeling poor and worried. More than usual, anyway. I know one has to take a long view of these things, but the lack of bright spots is a little daunting.
CNA put out a two-part documentary on Singapore’s fiscal reserves, promising unprecedented access and interviews, which I found quite enlightening. There was a visit to a secret warehouse literally filled with tons of gold, and stories about how this war chest came into being from the early days of our independence. It had not occurred to me before that our reserves were used to weather the 2008 financial crisis and Covid without issuing more debt, a luxury most countries did not have. Nor that one of the reasons we’re able to enjoy such a low tax rate is that annual income from invested assets helps to offset spending on public infrastructure.
Here are the episodes on YouTube:
===
I had fun this week with TikTok’s “Aged” filter, which is certainly not a new concept as far as apps are concerned, but it’s probably the most advanced execution yet. Through a blend of machine learning with harvested personal data from millions of non-consenting people and regular ol’ voodoo, it shows you what you’ll look like as a pensioner (should pension funds survive the financial end times). Some people have tested it on photos of celebrities when they were younger, and the aged photos reflect how they really look now, so… this is probably how you’ll turn out! Might as well get comfortable with it.
It turns out that old me will look kinda like one of my uncles, and I’ve been having fun recording aged videos in a wheezing voice and sending them to friends and colleagues.
Some of the other trending filters on TikTok are pretty sophisticated mini apps that involve a prompt box for generative AI. It takes a photo of you and will restyle it as a bronze statue, an anime girl, or whatever you ask it to do. They are also incredibly fast, compared to other generative AI image tools, which suggests Bytedance is burning some serious cash to power these models and gain AI mindshare.
I also came across a new product called BeFake that will try to take this one feature and turn it into an entire social media network based on posting creative generative AI selfies. It makes some sense — you don’t have to be camera ready (already a low bar with some of the beauty filters now available), and you can showcase wild ideas. Will this sweep the world only for people to get tired of unreality and swing back to finding “boring” posts interesting? Stranger things have happened.
===
On Sunday we went to the ArtScience Museum (at the Marina Bay Sands) for a rare high-profile exhibition of digital art. Notes from the Ether says it’s focused on NFTs and AI, but it’s also got a lot of generative art that just happens to be encoded on blockchains. I was especially excited to see the inclusion of work by DEAFBEEF and Emily Xie (Memories of Qilin), and Tyler Hobbs and Dandelion Wist’s QQL project was also presented for anyone to play with.
Obviously this movement is in a weird sort of place at the moment. Valuations for most projects are as volatile as shitcoins, and a few “blue chip” projects like the ones displayed are more stable, but only about as much as bitcoin. Because NFT art is defined in large part by the medium, which is currently inseparable from talk of price and value, it’s hard to have a viewing experience divorced from these considerations. You don’t really visit a Monet exhibition and think about how much everything costs. Which is why the Open Editions I mentioned last week are interesting, and likewise with this event, which offers you a free NFT at the end. You get to co-create an artwork with an AI engine by uploading a photo of your own to be transformed, and it’s minted as a Tezos NFT if you’d like. I thought it was a very cool collectible to remember our visit by.
I don’t think I’ve ever seen more affordable tickets at this museum, just S$6 with a further 30% off if you sign up for a free “Sands Lifestyle” account, so there’s little excuse not to go if you’re remotely interested in this stuff.
Since we were already there, we also hopped into Sensory Odyssey: Into the Heart of Our Living World which pairs 8K video projections of natural scenes with immersive sounds and scents. In one space you’re smelling fresh air and damp earth in a rainforest, and in the next you’re underground with mole rats. It’s very cool, but ruined by small children being allowed to run loose in front of screens (can’t really be helped), and elderly museum staff loudly declaring that “this is a night savannah, very dark, no need to be scared!” (can be helped with training) in such a way that any illusion of being in a savannah is totally pierced — unless you’ve gone on a safari tour with a gaggle of Singaporean aunties, of course.
A tough and tiring week under dispiriting circumstances. But in the grand scheme of things, the worry is optional and the problems are irrelevant. So I remind myself!
It’s Thursday night as I write some of this in advance and we fly to Melbourne tomorrow night. I am ungraciously unpacked, a rarity. I’m hoping to fit everything into a single cabin bag for the first time. I’m traveling light. No cameras, no gear, and no plans to bring any shopping home. The mission seems to be merely spending a week on another continent. Okay maybe I’ll bring my Switch.
At work, I started doing team updates as a newsletter. I ask everyone to send me what they’ve been up to, and they’re free to write a few lines or a bullet list. I chuck all of it into ChatGPT using a fairly specific prompt, and out pops an entertaining roundup of the week that reads like a news radio show.
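For the curious, the roundup workflow above can be sketched in a few lines of Python. Everything here is my own illustration: the prompt wording, the example updates, and the `build_roundup_prompt` helper are hypothetical, not my actual prompt. The assembled string is what gets pasted into ChatGPT (or sent via the API).

```python
def build_roundup_prompt(updates: dict[str, str]) -> str:
    """Assemble raw team updates into a single ChatGPT prompt.

    `updates` maps each person's name to whatever they sent in,
    verbatim: a few lines of prose or a bullet list.
    """
    # One headed section per person, kept exactly as they wrote it.
    sections = "\n\n".join(
        f"## {name}\n{text.strip()}" for name, text in updates.items()
    )
    # The instruction block pins the tone and forbids invented facts.
    return (
        "You are the host of a lighthearted news radio show. "
        "Turn the team updates below into an entertaining weekly roundup. "
        "Keep every fact intact, group related items, and do not invent "
        "anything that isn't in the source material.\n\n" + sections
    )

# Hypothetical example updates, in both formats people actually send.
prompt = build_roundup_prompt({
    "Alice": "- shipped the new onboarding flow\n- fixed two login bugs",
    "Bob": "Spent the week profiling; dashboards load about 40% faster.",
})
# `prompt` is then pasted into ChatGPT to produce the newsletter.
```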
It strikes me that I could easily do the same for these weeknotes right here, except for the times I go off and end up writing 1,000 words on something (which is quite often). I hope the act of a human spending their valuable human life minutes every week to write these updates by hand makes them more valuable than if I just ask an AI to elaborate. Lord knows the quality is close.
I came across this story about the potential for AI models to collapse as they’re trained on increasingly reflexive information generated by AIs, decaying like analog copies of a tape. This is of course what we’ve been wondering about: can AI keep learning to create new things in the absence of new original inputs from humans?
And it might be inevitable, because there’s as yet no reliable way to identify AI-generated content, and it’s going to be invisibly and thoroughly mixed into every pool of data. Even Amazon’s Mechanical Turk workers are using ChatGPT to do their work, which is explicitly meant to be human work. It’ll be interesting if years from now we look back at this moment in time when it looked like AI was going to take over everything but then suddenly fell apart and became unviable like seedlings in poisoned soil. Like HG Wells’ invaders succumbing to the common cold.
It took awhile but I finished reading Matt Alt’s Pure Invention: How Japan’s Pop Culture Conquered the World. My Goodreads count for the year so far is a pitiful TWO. Anyway the book is enjoyable and well done. It promised previously untold stories about the invention of karaoke, the Walkman, the Game Boy, and others, which I doubted heading in — don’t we all know these stories? Would there really be anything new here? But I definitely learnt some new details here, and Alt does a great job of stitching it all together into a decade-spanning thesis about innovation, globalization, and the power of culture.
In Melbourne now after a couple of nights of bad sleep, after a miserable red eye flight where I got maybe an hour of sleep, after staying awake most of Saturday. Finally rested on Sunday. Listening to Apple Music’s excellent playlist of songs produced by MIKE DEAN. Looking forward to a chill week of Nintendo, coffee, reading, a visit to my favorite museum of screen culture, and no expectations of doing much more.
Amongst last week’s music releases, I missed a new album from Bob Dylan. And from Ben Folds. If I’m out to shift blame, it’s more like Apple Music neglected to inform me about them. The algorithms could use some work.
Melbourne
I started generating Midjourney images for a conceptual series and am in the process of curating the collection. Maybe I’ll put it up in a separate post at some point next week. It’s called Strange Beach, and I’m shooting for “wrong”, trying to prompt my way to pictures that are subtly unsettling or unhinged, yet set on a sunny Hawaiian beach. Some not so subtly.
It was WWDC week, and hours before the keynote event started, I was telling people that the thought of an Apple XR headset made me tired. I knew that if it really was happening, the world would never be the same again, and we would be starting a whole new cycle of change: changes in the way we interact not just with computers, but with entertainment, services, each other, and the hundreds of companies in our orbits. That takes a whole lot of energy and enthusiasm (positivity?) to prepare for, especially if you're in one of the industries that will need to be an early mover.
And this is just my gut talking, but after the big reveal of the Apple Vision Pro, I felt that positivity surging through me. It was an exciting prospect — yes, it’s still a heavy thing strapped to your head, and it has the many limitations and intentional design constraints of any first-generation consumer product — but I felt that Apple thoughtfully got the experience foundations right (again). This looks like it could change the world in an exciting and additive way.
I can’t wait to try it out and get my own, but it will probably be the end of 2024 before it lands in Singapore. That gives everyone plenty of time to think about and design for a spatial computing future. Do I think the price is justified? Sure! It’s not really comparable to any other product at any price, which is the beauty of their ecosystem play (again).
On the downside: the technical achievements it contains are incredible, but they will need to become more incredible very quickly. Over the next few years, it will need to become lighter, smaller, faster, and cheaper to get us where this "vision" is pointing. Or perhaps they believe the parallel development of a photon passthrough technology (that is surely continuing internally) will pay off before then and become the solution. I'm referring to true AR glasses, of course, rather than this VR headset that acts like glasses by having screens facing inwards and outwards.
Side note on those outward-facing eye screens: it's funny how that detail was thoroughly leaked, and we knew it might have screens that showed your eyes to others, but nobody could come up with a way for it not to look awful. And yet, the real thing looks pretty good! Dimming and blurring a virtual avatar's eyes so that they look recessed behind frosted glass? Brilliant. Wanna put on a pair of comedy Vision Pros? Try this Snapchat lens; it's super amusing when pointed at the TV.
But let's not forget the other things announced at WWDC. I'm super excited for iOS 17's Journal app*, as I said several weeks ago; the new AirPods Pro adaptive mode sounds exactly like what I've been wanting for a while; Freeform showed that it isn't being neglected, with some great-looking new drawing tools coming; and the Apple Watch really did get a good rethink of the UI! The Side Button will now pull up Control Center instead of the Dock I never use, which is being replaced with a new Smart Stack model that sounds good in principle. And that new Snoopy and Woodstock watchface? Plus a smarter transformer-based keyboard and dictation? A more easily invoked Siri? Wow! (Ten bucks says a transformer-enhanced Siri is in the works for next year.)
Sadly, Apple Music only got light design refinements instead of the rethink I was hoping for. Oh well.
*The Verge’s Victoria Song is skeptical about Journal.app because it relies on AI to suggest journaling prompts, which as Apple’s Photo Memories have proven, can be inappropriate or tone deaf. Personally I’m just planning to use it as a lifelogging tool: where I went, what I saw, what I was listening to. I’ll probably write entries manually, no prompts needed.
===
On Thursday evening I checked out the National University’s industrial design program’s graduation show with some colleagues who came out of the program a few years ago. There were some thoughtful projects and most were well presented. The kids are alright, etc.
Then on Friday evening I went with some other team members to visit the Night Safari for the first time in probably many years. The iPhone 14 Pro's camera let me down by defaulting to very long night mode shots even when there were moving animals. I'm talking hold-still-for-10-seconds type situations. I wasn't using Halide because I wanted Apple's smart processing to light up the dark as much as possible, but it didn't seem to make the right trade-offs.
It continues to be super hot and muggy here; I was sweating my butt off both nights outdoors. Looking forward to the cool Melburnian winter weather in a couple of weeks.
===
Inspired by the album-listening technique of Pearl Acoustics' Harvey Lovegrove (mentioned last week) — put it on all the time in the background for a few days, then sit down to listen to it once through properly, after it's already soaked into your subconscious — I've been listening a lot to Cisco Swank's debut album, More Better. It's a seamless blend of jazz, hip-hop, and soul that a fellow musician, quoted in the New York Times, described as "black music. All of it."
Speaking of music, Kim returned from her trip to the US and brought me back an unexpected gift: a pair of the new Beats Studio Buds+ with the translucent case! I was coveting them but probably wouldn't have bought them for myself, and they're still not available locally, with no release date in sight either. But since I have them now, I can't complain. #blessed
I started playing Astral Chain on the Nintendo Switch, a stylish beat-em-up that came out very early in the console's life and looks astonishingly good, period. I'm now putting Bayonetta 3 on my wishlist, because Platinum obviously knows how to get incredible visuals out of this aging hardware.