Tag: Artificial Intelligence

  • Week 6.24

    It was Chinese New Year week and it’s not been the same for several years, partly because of Covid and partly because many key relations have simply gotten old. My uncles and aunties, for decades known to me only by honorific titles describing their places in the family hierarchy, are getting ill, weak, and unwilling to leave their homes or receive visitors. Several years ago I started to ask what their actual names were, because I grew up literally calling them names that translated to second son of third uncle, or eldest paternal uncle; I’ve often wondered why Chinese culture has mechanisms that foster emotional distance between children and adults, a tradition that feels increasingly out of touch with today’s world. But that’s how it goes.

    I remember the typical CNY reunion dinner for many years as something to get through with gritted teeth and withheld snarky jokes, and if you’d told me then that I would look back now on those as the good times, I would have despaired. Now they appear as charmingly awkward and well-meaning attempts to bond, by people I might never get to wish a happy new year to again. I always thought the idea of an annual reunion made more sense against the backdrop of a vast country like China, but now that it’s hard to connect for all sorts of reasons even across our tiny one, I see the real terrain is time and memory, and so many relationships die starving in those fields. That’s how it goes too.

    ===

    One of our friends lost his dad this week. He was by all accounts the kind of guy you call a ‘real character’. He went from a corporate career to becoming a writer in retirement, putting out three books in the last decade or so. Because we didn’t have any reunion dinner commitments at the usual time, we were able to attend his funeral wake and share in some lovely stories from the family. They even managed to joke that this terrible timing, right in the middle of the festivities, was the last prank he’d play on them. I learnt he had a regular blog, which he kept going even after suffering health setbacks — that’s dedication. Every week I wonder what I’m going to say here and always think the streak’s about to end.

    Even the funeral was remarkable for the fact that he planned it all himself, leaving behind detailed instructions on what he wanted, to the point of getting his own headshot taken and sealed in an envelope which his family only opened after he was gone, so they wouldn’t have to fuss over this stuff at the worst time — that’s love. It got me thinking that everyone should prepare their own playlists and slideshows too. I might get started on it this year. Don’t be surprised if you come to my funeral and hear that Chinese AI dub of Van Morrison on repeat.

    ===

    Other activity:

    • I’ve started on a new book and only read two short chapters but I already know this is something special. It’s People Collide by Isle McElroy.
    • We are back watching Below Deck and I’m still sure that this is one of the best management trainings you can get for your time. Every single season is full of unnecessary crew drama because people don’t communicate expectations, don’t provide clear feedback, and allow emotional reactions to escalate. I get that it’s not easy, and I’m not sure I do it so well myself, but the lines between action and consequence are so clear here; they’re literally edited together for entertainment. Another lesson: everyone is flawed. The people you root for because they’re usually sensible? Sometimes they fuck up! Working relationships are rarely black and white.
    • Kanye West finally released Vultures two months after its initial release date, with a new cover and possibly a different tracklist. I haven’t heard it yet. But once again, Apple Music has not recommended me music I’m interested in, although I’m quasi-certain it will be featured somewhere in the app in the next few days. Right now the Singapore ‘Browse’ page is full of Chinese New Year related music that I definitely do not care about.
    • After working too hard and not getting enough rest, Kim’s sort of fallen ill now, with me feeling a milder version of it. The timing could not be worse: we’re off on holiday soon, the kind where sneezing and aching and feeling weak might derail a complex itinerary.
    • Speaking of which, I used ChatGPT to help plan this vacation, and I’ve taken those instructions and made a custom GPT called AI-tinerary which might help you if you’re going someplace new and want to create a multi-day schedule of things to do. It can work off your individual interests and transport modes, as well as answer other travel-related questions you may have. At some point I’ll be able to make it plot your route on a map (if you ask it now, it’ll generate some wacky DALL•E map drawing that you should absolutely NOT follow).
    • You should know by next weekend whether these AI-generated plans worked out, or if we tried to stay in towns that don’t actually exist.
  • Week 3.24

    I have come down from last week’s AI overpositivity and retaken control of this week’s update. I don’t know what came over me, especially when it’s so easy to see the issues that this current gen AI fever, this onslaught of enshittification, has yet to unleash. We’re poisoning a well, or maybe an orchard, that many people have spent decades building and many more depend on even if they don’t know it. I had two conversations on Monday, one about the disadvantageous state of jobs for 20-somethings and another about the Apple Vision Pro, and found myself in both of them articulating a deep pessimism that I haven’t been able to shake. Even if you buy into accelerationism, there’s clearly a risk of multi-decade spoilage here that future generations will hate us for.

    On Apple Vision (which is what I think the overall product family is called), I mentioned to Brian that I’ve been seeing a lot of Meta’s Quest 3 TV advertising whenever I tune into programs on the UK’s Channel 4, and how they’ve gone from selling immersive VR experiences with the Quest 2 to AR use cases like learning to play the piano — the same territory that Apple’s staking out. And how it won’t be very long before the Android equivalents of the Vision Pro gain market share, on account of being several times more affordable, while hoovering up eye movement data revealing customers’ intents, attention, and probably physiological info, because none of these other manufacturers will take the pains Apple does to deny developers access. We’ve seen these playbooks before.

    Brian and I have also previously discussed the ability of conversational AI products to deeply profile their users, not just by knowing what you want to know about, but how you think, react, speak, and write — what kind of person you are. A conversational interface with generative AI, trained on large amounts of data, is nothing short of a profiling machine that sees you at a behavioral and psychological level. Combine that with knowledge about what draws your eyes and sets your heart racing, and an ad-supported AR headset with built-in AI assistant is a nightmare product that will inevitably be a hit at $499.

    Thinking of the battles that ethically minded designers will have to fight and probably lose, deep in organizations intent on deploying AR/VR dopamine and AI-powered enterprise doodads without question, is what makes me tired these days.

    Later in the week, Jose shared this update on the Fujitsu Horizon software debacle at the UK Post Office, a case of irresponsibly deployed technology that literally ruined and ended human lives. And that’s just the legacy stuff, without any newfangled AI.

    ===

    • I’m finding the first Slow Horses book to be less enjoyable than I expected, mostly because it feels like I’m just rewatching the first season of the Apple TV+ show, nothing less and nothing more. I sort of expected more side story or entertainment than was possible to film, but it’s a rather straightforward procedural. The TV series might be the rare adaptation that’s on par with its source material, in which case I won’t read the rest after this one and will wait to watch Gary Oldman fart his way through them instead.
    • The second season of Reacher fell into the sequel trap, going for more action, more teamwork, more humor, more repeated catchphrases (this did NOT work), and losing something of its charm in the process. They decided to portray him as a sort of humorless Arnie-type killing machine who doesn’t understand normal people’s thoughts, and that doesn’t seem right to me based on his characterization as an astute detective/observer of human nature in the books. I was also hoping they’d go the Slow Horses route and just make the books in order, but they instead jumped to the 11th novel, Bad Luck and Trouble, for this season. Reading this interview with showrunner Nick Santora though, I got the feeling that making Reacher indefinitely is not something anyone on the team takes for granted, so why not go for broke while the Amazon money is flowing? Still, the thrill of seeing Reacher with his team is a payoff that has to be earned, and it’s not the same if you haven’t seen him wandering America solo for ten seasons beforehand.
    • We’ve started season 3 of True Detective, and I’m really liking some of the things they do with blending the recollections of an old man fighting a fading mind with the disorientation and terror of his present life; the two are literally linked with match cuts and unifying objects — in one flashback a full moon disappears above the detective, and we come back to the present to see a fill light has gone out during the interview, and he’s shaken out of his memories.
    • I’m new to the music of Claud, but their superb album Supermodel would have made one of my lists in 2023.
    • I fired up Lightroom to see what new features they’ve added, and there was a new Denoise tool that seems to use AI to generate missing detail — fine, it’s unavoidable — and AI-powered preset recommendations. With one click, I applied a dramatic preset to an old RAW file which made it extremely noisy, and with another click removed all of it and landed on an incredibly sharp and clean image. I’m a little sad about how hard it is for small indies to compete with Adobe on this stuff. Photomator has an ML-based auto enhance feature that really doesn’t work well, often overexposing and making white balance look worse, whereas the Auto button in Lightroom makes improvements 90% of the time.
  • Week 2.24

    This post was partly written by my blog assistant GPT from notes I gave, and partly transcribed by a Whisper-powered dictation app I’m testing, so it’s just dripping with that AI filth (but the human did edit).

    I’ll probably remember this week for feeling like the future finally arrived, thanks to three long-awaited developments taking up headlines.

    1. Apple Vision Pro – The Dawn of Spatial Computing

    • The Apple Vision Pro got its pre-order and launch dates. Sadly, it’s US-only for now, leaving me and many others on the sidelines. It promises to usher in a world where computing isn’t confined to screens and devices, but blends seamlessly with our physical spaces. Along with AI, we may see a new era of interface and interaction design land sooner than expected, alongside new levels of realism and intelligence I don’t think anyone is ready for. But as a sure sign that this early adopter is growing old, I’m feeling surprisingly wary of and unready for such a transition.

    2. A Milestone for Bitcoin: Spot ETFs approved in the US

    • In a historic move, the SEC approved 11 spot Bitcoin ETFs and they began trading on Thursday with a record-breaking amount of volume. Although against the original ethos of decentralization, this is still a big deal which legitimizes the cryptocurrency for audiences who want some exposure but can’t self-custody for some reason. After a decade of anticipation, this decision bridges the digital world with traditional finance, making a fully digital asset accessible through familiar investment channels.

    3. OpenAI GPT Store Finally Launches: A New Playground for AI Enthusiasts

    • As someone who’s been creating custom GPTs with ChatGPT, the launch of the OpenAI GPT Store is particularly interesting. Originally scheduled for last November, it finally went live but hasn’t set my feeds on fire just yet. To make things worse, the promised revenue sharing model won’t start until later, and again, only in the US at first. Still, this could be the App Store for a fast-evolving space. I’ve already seen a few advanced applications on the front page and will be keeping an eye on it.

    These advancements in computing, finance, and AI aren’t just incremental steps; they’re giant leaps in their respective fields. The Apple Vision Pro is set to literally put technology everywhere, the Bitcoin ETFs are proof that a “digital gold” can be taken as seriously as the real thing, and the OpenAI GPT Store shows how generative AI can let anyone become a “developer”. It’s like watching history being made in real-time.

    By the way, I made a fun new GPT called How We Got Here.

    ===

    So I’m watching this show, True Detective, which you may remember from like 10 years ago. The first season starred Matthew McConaughey and it was a huge hit for HBO that I liked a lot.

    But then when the second season came out, before I could get started, a lot of reviews came out calling it like the worst show ever. And even though Rachel McAdams and Colin Farrell were in it, it just wasn’t a hit the way the first one was. So I never got around to watching it.

    And here we are years later and Season FOUR is about to come out today with Jodie Foster and a whole new showrunner/director/writer involved and it’s getting a lot of buzz. People are excited for it.

    That’s when I realized that there was a Season Three, like I didn’t even know that it existed. So now I’m spending my weekend binging seasons two and three to get ready for four.

    Now, this is not strictly necessary, because every season is a completely new story with its own set of characters, but I just feel the need to be complete about my True Detective experience.

    If you’re wondering how I have the time for this, it’s because Kim is again away on business, which also means that I can’t watch this week’s episode of Reacher. So I guess we’re going to watch the final two episodes back-to-back next weekend. I can’t believe it’s already over — reading the rest of the books will have to tide me over until next year.

  • How We Got Here

    Reverse engineers images to uncover the chain of events that led to them.

    Available with ChatGPT Plus

    I’ve seen a lot of GPTs that do interesting things with image recognition and DALL•E, like analyzing an image to understand the scene and then recreating it from scratch as a painting.

    But I wanted to do something different, like, “what if we could guess the stories behind the pictures?” Obviously it’s for laughs, so this GPT is called “How We Got Here” and it theorizes a possible chain of events that took place before the picture was taken, resulting in what you see. It works for any given scene or object, but because of OpenAI’s restrictions it will refuse to comment on images of famous people.

    Here’s an example:

    1. A parliament session started with the introduction of a controversial bill: The Mandatory Karaoke Performance Act.
    2. The opposition leader, known for his baritone renditions of “My Way”, vehemently opposes the bill, fearing unfair advantages for tenors.
    3. During the debate, an amendment is proposed: All speeches must be sung to the tune of popular folk songs.
    4. The finance minister, a secret shower-singer, is thrilled and immediately begins to belt out budget allocations in A-minor.
    5. The amendment passes by one vote, thanks to a swing voter who misunderstood the motion, thinking it was about free concert tickets.
    6. The session escalates as a counter-amendment is suggested: All sung speeches must be auto-tuned for fairness.
    7. Chaos ensues when the auto-tune machine malfunctions, making everyone sound like rogue robots.
    8. In a bid to shut down the malfunctioning device, a scramble erupts, with one member accidentally broadcasting his vocal warm-up scales live on national television.
    9. The image captures the exact moment when the tech-savvy intern is summoned to fix the auto-tuner, while the rest try to cover up the incident by looking deeply concerned about fiscal policies.

    Try How We Got Here with your own photos.

  • Cruising For Love

    Can you find your soulmate before the ship returns to shore?

    Play Now with ChatGPT Plus

    After making the sci-fi adventure Chrono Quest, I thought my next GPT game should be all the way in the other direction, so Cruising For Love is a bit of a rom-com dating sim set on a cruise.

    You are on a five-day cruise to try and find romance. You should have a new experience each time you play: new destinations, new activities, new people to meet, and hopefully new breakfast items at the buffet restaurant. You don’t have to tell the game your gender or what kind of person you’re into, but it doesn’t hurt.

    You can simply play it like a choose-your-own-adventure game and pick from the multiple choices given to you at the end of each turn, or take the keyboard and start providing more detail about where you’d like to take things. You can double-time/triple-time, play hard to get, take someone shopping for diamonds, reveal your secret magic skills, or try to seduce the captain. Nearly anything you can dream of, as long as it’s related to finding love.

    You may or may not encounter some surprises along the way, making your successful pursuit of a love interest not exactly a given. So turn on the charm, put your leisure suit on, and start cruising for love!

  • Week 50.23

    Christmas is creeping closer, but the Goodreads Challenge angel won’t be darkening my doorstep as I’ve redeemed myself with two weeks to go! James Hogan’s Thrice Upon A Time was the twelfth book of my year, and definitely one of the better ones. It’s a 1980s time travel story where no time travel takes place, but it grapples with ideas about how timelines are rewritten, plus some other global topics that seem quite prescient when read today. Stylistically, it’s aged, but in that classic sci-fi way I love, which takes me back to reading books in the library after school. I think those hours, that precious access back then to a ton of books I couldn’t wait to read, were the part of going to school I looked forward to most. Anyway I’ve started a dumb new book that I should be finishing this year for bonus credit: The Paris Apartment by Lucy Foley.

    If you’re looking for reading material, it may interest you to hear that I somehow managed to finish B’Fast, the AI-generated breakfast zine project I mentioned last week. The InstaZine GPT I made to create the content is also available through the same link (I updated the page with some additional usage tips today). Now that it’s done, I’m planning to make a companion breakfast-themed zine called “B’Fast (Brandon’s Version)” which will be made entirely by me, in a way that an AI presumably would not. But probably not straight away.

    ===

    Earlier this year, Hipstamatic redesigned and relaunched their Hipstamatic X app. The “X” was dropped, and they added a new social feed. It was the official replacement for their original app, which became Hipstamatic Classic. Where the original was funded by in-app purchases for new filters (at a pace of roughly one new 99c release each month), the new Hipstamatic charges a $30/year subscription, more than doubling their income from faithful fans.

    I used the new app for some photos during my trip to Japan and mostly enjoyed the experience, but it was too buggy and the UI was still too cluttered and confusing (a longstanding problem with the original Hipstamatic app as well) for me to consider continuing with a paid subscription.

    Their main problem is that there are now over 300 filters in the form of “films” and “lenses” and “flashes” that you can combine to make infinite looks, and no good way to make the attractive human-curated combinations they recommend accessible and discoverable. In the last version, they tried to give a few of these combinations the tangible metaphor of being unique “cameras”, each one with a different skeuomorphic body you picked off a shelf, but essentially they were presets you could call up. But you can keep, what, ten of these aside in a little drawer before you can’t tell them apart? And so many of the other combinations were left out of sight and out of mind.

    Now, after a week of teasing social media posts, wherein a “physical camera” was shown in videos — quite obviously a 3D model rendered in AR, but some people believed they were going to release a hardware product anyway — they’ve released a major update (v10) that tries to untangle the Gordian knot of their UX issues.

    In this new version, they’ve tried to marry what worked in the original app with a new info architecture and set of metaphors to manage the library of looks they’ve accumulated over the last 14 years. You get just ONE skeuomorphic camera to call your own and customize the look of, and this camera is capable of loading up many presets. You can either let the camera detect the scene and choose a suitable preset for it (Auto mode), or specify the preset yourself (Manual mode). There are 9 possible scenes, such as Travel, People, and Still Life, but in a puzzling and unfortunate move, when you start using the app, each of these scenes has just one or two associated presets. That means you’re going to see the same looks over and over, when there are over a hundred more hidden away in a long list. This was presumably done to allow you, the user, to customize your experience and assign your favorite presets to the scenes.

    There are two major problems at this point. One: leaving it up to the user to gain their own understanding of all the pre-existing “good combos” and assign them to 9 scene categories is insane. It’s a lot of work to hand off to a customer you hope will pay you money. The team should be doing the work of tagging each preset combo with a recommended use case, AND making it easy to assign them. It’s not currently easy. I had to move back and forth between two sections of the app looking at presets and memorizing their names to go assign to a scene, because these things aren’t placed together. Off the top of my head, it just needs an in-line list of suggested presets (from the aforementioned tagging exercise) on the same screen where you customize a scene’s presets. Perhaps this is coming. I’d argue it should have been in the MVP release of such a big redesign.

    Two: as I mentioned, there are infinite possible presets given the number of ingredients they’ve accumulated. You can make your own combos, but there’s no great way to experiment and do this — there should be a sandbox where you can explore each lens/film/flash’s characteristics and try them out in real time to find a good combo. There used to be a section of the app called the “Hipstamatic Supply Catalog” where you could browse all these effects (it was only like a static magazine, but they could have made it interactive), and this now seems to be gone or I can’t find it anymore in the maze of menus and buttons. Perhaps they’re okay with most users just using the curated “good presets” and never making their own, but it seems like a missed opportunity.

    I was feeling a mix of optimism and boredom, so I paid for an annual subscription anyway and will be trying to take lots of everyday silly snaps with this, and maybe even use it on my upcoming trip to Thailand. But if you know someone who works at Hipstamatic, please talk to them about taking on some external advice.

    ===

    • I finished watching Pluto on Netflix. It’s still a strong recommendation for me; a modern anime made with classic sensibilities and a story that really keeps you guessing. It’s also a very different Astro Boy story, suitable for people who hear “Astro Boy” and think it’s stuff for kids.
    • We started watching A Murder at the End of the World and I’m really liking it so far. Especially its star, Emma Corrin, who I’ve never seen in anything else before. They’ve got the most strikingly similar face to Jodie Foster; I was sure they were related.
    • New playlist! BLixTape #3 is done, made up of mostly new songs that I’ve been listening to since mid-October. Add it here on Apple Music.
  • Zine: B’Fast

    I mentioned last week that I’d started making a zine about breakfast using AI tools to write all the text and make the pictures, leaving the final job of laying it out to my own (inexperienced) human hands.

    Well, here it is. It was a lot of fun to put together, and I got a crash course in print design while trying to evolve it from a word processor document into something a little more creative and fun. You can sort of see the gradual improvement from beginning to end, as things become a little neater and more coherent — much like the mind in the morning as you make your way through breakfast.

    I’d say the final content is 98% AI generated; I made some cuts and changed just a handful of specific phrases: in one place to avoid possible offense, and most of the others were in a poem at the end, to make it suck less.

    I did it with the help of my custom GPT: InstaZine, which I encourage you to check out if you want to do something like this. It will brainstorm a bunch of different article ideas for your approval, then write them in a range of different authorial voices, hopefully giving the final product more diversity and interest than if you just did it the normal way with ChatGPT.

    You can read the zine below in the embedded Issuu viewer, or download the PDF. Let me know how you like it!


    Some additional tips for using InstaZine: Start by telling it what the subject of the zine should be, and describe what it should be like if you know. Ask it to create a list of article ideas — it should suggest a bunch of titles with a short abstract of each one, usually with a fictional author’s name and some description of the style and approach it will take. If you’re happy with this list, you should copy it off to the side in your own notes and play these descriptions back to the GPT before asking it to write each one. The reason is… the context window still isn’t great with ChatGPT as of Dec 2023, and it won’t necessarily do a great job if you don’t remind it of the brief after 20 messages of writing other articles and generating the images for them. By default, it creates colorful minimalist illustrations suited for a certain kind of magazine I like, but you should override this with whatever style you want.
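
    If you ever wanted to script this brief-reminder habit rather than paste it back in by hand, here’s a minimal sketch of the same pattern using the OpenAI Python SDK: the overall brief gets re-sent with every article request instead of being left to survive a long chat history. The model name, the zine brief, and the article descriptions are all placeholders I invented; this isn’t what the InstaZine GPT runs behind the scenes, just the same idea expressed in code.

    ```python
    # Minimal sketch: repeat the zine brief with every request so the model never
    # has to dig it out of a long, crowded context window.
    # All prompt strings below are placeholder examples, not InstaZine's real instructions.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    ZINE_BRIEF = "A playful zine about breakfast: short articles, varied authorial voices."

    articles = [
        "'Ode to Toast': a nostalgic personal essay by a fictional food columnist.",
        "'The Cereal Index': a tongue-in-cheek ranking of cereals by crunchiness.",
    ]

    drafts = []
    for description in articles:
        response = client.chat.completions.create(
            model="gpt-4-turbo",  # assumed model name; use whichever one you have access to
            messages=[
                {"role": "system", "content": "You write zine articles, each in a distinct authorial voice."},
                # The brief rides along with every single request, instead of being
                # trusted to survive 20 messages of chat history.
                {"role": "user", "content": f"Zine brief: {ZINE_BRIEF}\n\nWrite this article: {description}"},
            ],
        )
        drafts.append(response.choices[0].message.content)

    for draft in drafts:
        print(draft)
        print("---")
    ```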

  • Week 49.23

    As usual, I find myself in disbelief that another year is nearing its end. My Goodreads Reading Challenge count stands at 11 out of 12, and I’m halfway through a book right now, so I guess I’ll just make it before New Year’s. Which, incidentally, I’ll now be spending overseas thanks to some last minute plans. I’ll say where and post some photos after I’m back.

    On reflection, it’s a bit of a shame that almost all the books I’ve read this year were just 3-star affairs. It’s like I’ve held back from tackling the big names on my reading list, choosing lighter and more inconsequential fare. In some ways, this has been a calmly chaotic year, with instability in the wider world putting everyone on edge, and that may have influenced my need for soothing, low-stakes entertainment. I saw a mention somewhere that the self-care industry is “sedating women”, making them focus on trying to fix something in themselves instead of fixing the problems out there. I can relate.

    The holiday overeating has begun (although I may have forgotten to stop after last year), which I think is linked to a feeling of letting go and treating yourself in the evenings as work slows down (or seems less important) at this time of the year. We ended up eating out a fair few times, and as I write this I’m looking forward to another trip to Maji Curry this evening.

    It’s not just fat cushioning my bones — while at Tokyu Hands this week (now simply called Hands), I saw a $75 wavy seat cushion and decided I had to have it for all the sitting around I do when working from home. Does it do anything for me? I don’t really know! But I’m treating myself. And then on the weekend we wandered into some kind of fancy organic bedding store and walked out with a pair of new pillows. Kim unfortunately may have chosen the wrong height/density for her sleeping style, but after one night I can cautiously report that mine cradles my noggin just fine.

    ===

    Where’s the usual AI garbage, Brandon? I can hear you thinking it! Well okay, so Peishan mentioned she’d made a new zine, which reminded me of a project idea I’d written down and filed away. It was to make a zine on the subject of “Breakfast”, but using only AI-generated words and images.

    If you’re thinking that sounds like a pretty mediocre zine, then you understand the challenge here. We’re now at a point where generative AI’s infinite supply threatens to drive down the perceived value of all but the best; content vs. art. So I’d like to see whether my human labor of directing an AI worker to deliver above-average quality, and packaging its output as a coherent product, can create something worth looking at. The only way to find out is to make it! And now that we can do custom GPTs, I decided to start by making one that acts like a diverse team of writers and artists, with a range of different styles, which can then be applied to a zine on any topic you like.

    I’m still testing it out, but so far I’ve gotten a handful of articles. And in doing so, I’ve realized that I know nearly nothing about print layouts or how to design an attractive zine. I’ve read my share of mags, of course, but without effectively taking in their details. I’m making it with Pages on my Mac, and using its “Free Layout Mode” has been the best approach I’ve found. It’s sort of like a digital version of making a physical zine: I’m moving chunks of text and cut-out imagery around on A4 canvases; almost like scrapbooking. I just need more fonts and more imagination and more time.

    ===

    • I listened to no new music this week.
    • I didn’t turn my Switch on once.
    • I haven’t seen any films.
    • We did start Season 2 of Bosch Legacy though, and that’s still as great as ever. Not just the modern noir vibes and great jazz soundtrack. It’s a show that respects its audience and their time, without overelaborating on plot points or explaining every term or acronym that comes up. We’re already on episode 7 of 10, and I’ll be sad when it’s over. Thankfully a third season has been confirmed!

    (This week’s featured image was taken while walking around Tiong Bahru, edited with a Ricoh GR Positive Film effect simulation preset I made.)