Category: General

  • Fixing freezing and battery drain issues on modern Kobo readers such as the Clara Color

    How to solve a problem where a Kobo ereader hangs, freezes, or completely drains its battery to 0% overnight

    I’m posting this in the hopes of helping anyone with the same issue.

    I own an older Kobo Libra H2O model and never had any problems like this, but it happened often with my new Kobo Clara Color. I would pick it up in the morning and find the battery flat. Or I’d be using it and it would lock up and require a hard reboot. It would also lose my reading position in a book when this happened.

    The reason (at least for me; your mileage may vary) can be traced back to ePub files that contain errors such as improper markup or missing font specifications. This can happen if you’re loading your Kobo with ePub files other than those sold through the Kobo e-store, such as privately authored files or converted documents.

    You can address these errors in the free, open-source book management app Calibre. Select a book in the main list, then press “T” or choose “Edit book” from the menu. In the editor, look in the toolbar for an icon that looks like a bug; the tool is called “Check Book”. This will scan the ePub file for errors and offer to fix them automatically. Most of the time, this will solve all problems. Occasionally, a file may have problems that can’t be fixed automatically, in which case you may have to delete or edit some HTML elements yourself, if you know how. When you’re done, press the equivalent of Cmd-S to save the changes, and close the window.
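
    If you have a lot of sideloaded books and want to find the broken ones before opening each in Calibre, a quick triage script can help. Here’s a rough, stdlib-only Python sketch of my own (an illustration, not a substitute for Calibre’s Check Book or the official EpubCheck tool) that flags grossly broken files: corrupt archives, missing required entries, and malformed XHTML.

    ```python
    # quick_epub_triage.py -- rough sanity check for ePub files.
    # Only catches gross problems (corrupt zip, bad mimetype, malformed
    # XML); Calibre's Check Book is far more thorough.
    import sys
    import zipfile
    import xml.etree.ElementTree as ET

    def triage(path: str) -> list[str]:
        problems = []
        try:
            zf = zipfile.ZipFile(path)
        except zipfile.BadZipFile:
            return ["not a valid zip archive"]
        names = zf.namelist()
        # The ePub container spec requires these two entries.
        if "mimetype" not in names:
            problems.append("missing 'mimetype' entry")
        elif zf.read("mimetype").strip() != b"application/epub+zip":
            problems.append("wrong mimetype contents")
        if "META-INF/container.xml" not in names:
            problems.append("missing META-INF/container.xml")
        # Every XML-based document should at least be well-formed.
        for name in names:
            if name.endswith((".xhtml", ".opf", ".ncx")):
                try:
                    ET.fromstring(zf.read(name))
                except ET.ParseError as e:
                    problems.append(f"{name}: {e}")
        return problems

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            issues = triage(path)
            print(f"{path}: {'OK' if not issues else '; '.join(issues)}")
    ```

    Run it as `python quick_epub_triage.py *.epub`; anything that prints problems is a candidate for Calibre’s editor.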

    Finally, you’ll want to use a special Kobo-optimized form of the ePub format called KePub (or Kobo ePub) when transferring the files to your Kobo. This format is apparently fully compatible with all ePub readers, but offers enhanced performance when used with Kobos, so you may notice page turns and searches are faster.
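
    As far as I understand it (my assumption from poking at converted files, not official documentation), the KePub difference is mostly extra markup: each sentence gets wrapped in a span with a “koboSpan” id, which lets Kobo’s native reader track position and paginate more cheaply. A simplified Python illustration of the idea:

    ```python
    # kobo_spanify.py -- naive illustration of the KePub transform.
    # Real converters parse the full XHTML tree and handle nested markup;
    # this regex version only conveys the shape of the output, and the
    # id scheme is my assumption from inspecting converted files.
    import re

    def kobo_spanify(paragraph_html: str, para_index: int = 1) -> str:
        # Strip the <p> wrapper, split naively into sentences, then wrap
        # each sentence in a span the reader can use to track position.
        inner = re.sub(r"^<p>|</p>$", "", paragraph_html)
        sentences = re.split(r"(?<=[.!?])\s+", inner)
        spans = "".join(
            f'<span class="koboSpan" id="kobo.{para_index}.{i}">{s} </span>'
            for i, s in enumerate(sentences, start=1)
        )
        return f"<p>{spans}</p>"

    print(kobo_spanify("<p>It was a dark night. The wind howled.</p>"))
    ```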

    In Calibre’s Preferences, go to the Plugins page and choose the option to get new plugins. Search for KoboTouchExtended and install it, then configure it to automatically convert books before sending them to your device. You can refer to this Reddit thread and the links within it if you run into any trouble.
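
    Incidentally, if you prefer the command line, Calibre also ships a calibre-customize utility that can install a plugin from a zip file. A small sketch, assuming you’ve already downloaded the plugin zip (the filename here is hypothetical; I used the GUI route myself):

    ```python
    # install_kte.py -- command-line alternative to the Preferences GUI.
    import subprocess

    # Install the plugin from a downloaded zip (path is hypothetical).
    subprocess.run(
        ["calibre-customize", "--add-plugin", "KoboTouchExtended.zip"],
        check=True,
    )
    # Confirm it shows up among the installed plugins.
    subprocess.run(["calibre-customize", "--list-plugins"], check=True)
    ```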

    If you don’t use Calibre, there are standalone tools for converting ePub files to KePub, such as this web-based one that does everything in your browser (no uploading to servers involved).
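
    If you’d rather batch-convert locally, there’s also the open-source kepubify command-line tool (I believe it’s the engine behind at least one of those web converters, though that’s my assumption). A sketch that converts a whole folder, driving it from Python:

    ```python
    # batch_kepub.py -- convert every ePub in a folder with kepubify.
    # Assumes the kepubify binary is installed and on your PATH; the
    # library folder below is a hypothetical example.
    from pathlib import Path
    import subprocess

    library = Path("~/Books").expanduser()
    out_dir = library / "kepub"
    out_dir.mkdir(exist_ok=True)

    for book in sorted(library.glob("*.epub")):
        # kepubify writes <name>.kepub.epub into the -o directory.
        subprocess.run(["kepubify", "-o", str(out_dir), str(book)], check=True)
    ```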

    Ever since doing this a couple of weeks ago and transferring all my books over again, I haven’t run into the issue, despite using the Kobo quite a bit.


    Summary

    Problem Cause: ePub files with errors, such as improper markup or missing font specifications, can cause freezing, battery drain, and reading position loss on Kobo Clara Color.

    Solution: Use Calibre to check and fix ePub file errors, then transfer the files to Kobo using KePub format for enhanced performance.

    Plugin Installation: Install the KoboTouchExtended plugin in Calibre and configure it to automatically convert books to KePub format before sending them to your Kobo device.

  • Fixing an iOS/iPhone issue where apps don’t save photos with the original date and time metadata

    How to solve an iOS/iPhone bug that prevents third-party editing apps (e.g. Lightroom, VSCO, AgBr) from retaining the date/time metadata from the original photos when saving new edited files

    I’ve tried to pack the right terms in the title and heading above in the hopes that you’ll find this page if you have the same problem that I did.

    For several years, edited photos saved from apps like VSCO wound up with the “current” date and time as the “capture time” on my iPhone, meaning they don’t sort chronologically alongside the original photos when viewing the Photo Library. I noticed with one app that when I made edits on my iPad, it worked as expected, i.e. the original capture time was retained, but I mistakenly believed it was a bug in that specific app that only surfaced in their iPhone version.

    Last week I discovered this behavior in the new AgBr app, and got to emailing with the developers. In that process, I realized it was a problem with my iPhone, and most likely down to a bug in iOS rather than the photo editing apps.

    Without further ado, the solution is to “Reset All Settings” in your iPhone’s Settings app (under General → Transfer or Reset iPhone). This will undo many of your longstanding settings, such as known WiFi networks and cards in your Apple Wallet. It’s a pain in the ass, but much better than wiping the entire phone and starting over from scratch, which is what I thought I would have to do.

    I have no idea when I picked up this problem, but it’s a deep-seated one that has followed me across several iPhones, resurfacing with each migration and “restore from iCloud backup”.

    Funnily enough, I chanced upon the solution while whining about this problem to Michael, and while I was celebrating with far more joy than fixing a computer problem should give anyone, he pointed out that just a few weeks earlier he had also solved a longstanding problem with Safari on his Mac while telling me about it. I think we’ve figured out a winning formula.


    Summary

    Issue Description: Edited photos saved from third-party editing apps on iOS/iPhone lose the original date/time metadata, resulting in incorrect chronological order in the Photo Library.

    Cause: Likely a bug in iOS.

    Solution: Reset All Settings in the iPhone’s Settings app (under General → Transfer or Reset iPhone), though this undoes settings such as known WiFi networks and Apple Wallet cards.

  • An image that came to mind while thinking about Submerged

    I probably didn’t manage to close the loop on Submerged in yesterday’s update. The experience of watching a film in a blacked-out space where your own body isn’t visible, where it envelops your senses so completely that it becomes the entire world, and where it’s shot like a traditional film, with changing camera angles and the buildup of suspense, so you’re just forced along for the ride: this powerlessness is movie viewing turned up to 11. You’re inside the director’s vision and can’t really look away.

  • Strange Beach

    Thanks for coming to my Midjourney art exhibition:

    Edit: I’m currently in the National Gallery of Victoria in Melbourne and am reminded that most exhibitions also have descriptions for kids. So I’ve asked GPT-4 to expand on the wall text it helped me with and write a version for kids, which I’ve appended below.


    Strange Beach presents a provocative exploration of the uncanny, executed through the fusion of AI-generated imagery, Japanese anime aesthetics, and elements of surrealism and horror. Drawing on theoretical concepts associated with the 1920s Surrealist movement, this collection explores the destabilizing effects of disrupting familiar contexts and spaces. Surrealist influences, suggestive of Salvador Dalí’s dreamlike landscapes, are observable, yet the visual language is distinctively rooted in the tropes and stylistic conventions of anime, echoing the complex, often boundary-blurring narratives found in Satoshi Kon’s filmography.

    Within the context of Strange Beach, the typical Hawaiian-style beach — a common setting within anime — is reinterpreted. The injection of elements that challenge the norms of reality introduces an unsettling quality, resonating with the Grotesque tradition in art history that dates back to the Renaissance. The human figures, manipulated and distorted, bear stylistic similarities to the disquieting characters found in Junji Ito’s horror manga. The images, while unsettling, offer an invitation for viewers to question and reinterpret their traditional understanding of serene landscapes, provoking contemplation on the fluid boundaries between normality and the strange.

    For kids

    Welcome to “Strange Beach”! Have you ever imagined a sunny beach with surprising and weird things happening, like in a dream? Well, that’s what you’re going to see here. This art looks like Japanese cartoons, or ‘anime,’ but has been created by a computer!

    In these pictures, you’ll see a beach that might remind you of your favorite anime show. But look closely, because things are a bit strange. The people might look a bit like ghosts, or their bodies might look different than what you’d expect. It’s a bit like when you have a dream, and things seem a little odd or mixed up. It’s fun to think about what’s happening in each picture. So let’s go exploring and see what interesting things we can find on our “Strange Beach”!

  • [H-AI] A Nostalgic Leap: From Nintendo 3DS to Apple Vision Pro

    The following is a post written by GPT-4 given a detailed brief by yours truly. The [H-AI] tag in the title declares this as human-supervised AI content.

    In the tech world, it’s easy to get caught up in the latest and greatest. But sometimes, it’s worth taking a moment to look back at the innovations that brought us here. Today, I’m talking about the Nintendo 3DS and its 3D camera, a feature that seems to have found a spiritual successor in Apple’s newly announced Vision Pro headset.

    The 3DS, launched in 2011, was a marvel of its time. It brought 3D gaming to the palms of our hands, no goofy glasses required. But the real kicker was its 3D camera. With two outer sensors capturing slightly different angles, it could take 3D photos and videos. It was a novelty, sure, but it was also a glimpse into the future.

    The 3DS’s 3D camera was met with a mix of awe and skepticism. Some saw it as a gimmick, while others reveled in the new dimension it added to their photos and videos. Regardless of the reception, it was a bold move by Nintendo, a testament to their innovative spirit.

    Fast forward to today, and we see Apple taking a page from Nintendo’s book with the Vision Pro. This headset lets you capture and relive 3D moments at dramatically higher resolution: a big step up from the 3DS in fidelity, but the core concept remains the same.

    The 3DS may be discontinued, but its legacy lives on. It was a pioneer in 3D technology, a stepping stone to the immersive experiences we see today. As we anticipate the release of the Vision Pro next year, let’s not forget the devices that paved the way.

    So here’s to the Nintendo 3DS, a trailblazer in its own right. And here’s to the Apple Vision Pro, a testament to how far we’ve come. The future of XR technology is bright, and I, for one, can’t wait to see what’s next.

  • Week 21.23

    It was one of those weeks where not an awful lot happened outside of work. I don’t talk about work here but let’s sort of circle it.

    Reflections on AI

    One thing I can say is that I started making a presentation deck about the use of generative AI and GPT in design, initially to share my findings so far but increasingly as an exercise in structuring my thoughts into anything at all useful.

    A couple of things, on reflection: an AI assistant or collaborator poses significant risks for a lazy human in certain tasks since it tempts us to quickly accept its output without evaluating potential improvements. Assuming AI can do a job to 90% of a human’s rigor and quality, figuring out what the other 10% is without having done the same work yourself is quite the challenge. So the efficiency gains may not be as significant as you think, not until we figure out some smarter processes.

    An example of what I mean: you can feed ChatGPT the notes from an interview with a customer about their experiences and how a product fits into their lives. Supply enough interviews, and ChatGPT can do the work of summarizing them in aggregate, presenting key themes and areas worth looking into, like maybe everyone thinks the price is too high, but it’s because they don’t fully understand the value of what they’re buying.

    It can create a bunch of frameworks to illustrate these findings, like personas and service blueprints. And it can even suggest solutions, like better marketing materials to explain this value to customers. The AI’s output might look pretty good, similar to what a team of human designers would (more slowly) produce, and a company might be tempted to make business decisions based on it. In fact, a team of human designers who haven’t read the interview notes themselves or thought deeply about it might also look at the AI’s work and say it’s good to go.

    The level of confidence and finality inherent in these AI outputs is incredibly convincing. But if a human were to go through all the interviews, listening to the recordings perhaps, they might realize there was a missing element, a feeling subtly unsaid on the margins, that means some customers do see the extra quality, they just wish there was a cheaper version of the product that did less. Skimming through the finished research report crafted by the AI, you wouldn’t even begin to guess where in the sea of correct conclusions this exception could be hiding.

    But there’s no question that this stuff is ready today to do some tasks like image editing, seen in Photoshop’s impressive beta release of a “Generative Fill” feature this week. I took a stock photo and doubled its height, and it was able to get the context of the scene and fill in the missing ceiling almost perfectly. That would have taken an experienced image compositor at least a few minutes, and anyone else way too much time. Just a couple of clicks now.

    I also looked into what Adobe is building with its Sensei suite of AI marketing tools, and that dream of generating and sending personalized ads, as in a unique package of art and copy tailored to a single customer’s behavior, would seem to be already here. I’m not 100% sure how this works everywhere, but in the past, you’d still need copywriters and art people involved in the process after marketers had identified the “customer journeys” and content-goes-here templates. With the opportunities now being identified, advertising messages crafted, and email offers all sent with a single click by the same person, there’s hardly a crack in the door left for the traditional artists and copywriters to make their case. Yet, the quality is good enough to satisfy all but the most discerning of corporations.

    You may observe that two of the largest advertising platforms are already in this space.

    What do you think about the current advancements in AI and their implications? Share your thoughts in the comments below.

    (One more example: I asked ChatGPT to help suggest edits for this post, and it rewrote one of the above sentences to be better. I liked it, but on closer inspection, there was a glaring semantic error I had to fix myself. It also suggested the call to action above, to increase engagement. Talk to me!)

    ===

    Personal updates

    There seems to be yet another wave of Covid sweeping through the city, based on the fact that several people I know have come down with it, and every bus and train car I’ve been on this week had more people wearing masks, suggesting that they, too, know people who’ve come down with it.

    Kim is going away for a couple of weeks, and I’m hoping she doesn’t run into it out there either; one of her colleagues caught it while traveling in the region a few days ago. I’m planning to stay home as much as I can during this time and finish as many video games as possible.

    ===

    Media activity

    • Not a ton of progress in The Legend of Zelda: Breath of the Wild, which I’ve been playing consistently now for the past few weeks — a streak unmatched since the game first came out six years ago (I abandoned it out of fatigue shortly after that initial burst). I’ve got all four Divine Beasts pointing at the castle and now just need to build up the nerve and arsenal to storm it and be done with this. I seem to be procrastinating instead, exploring areas in the massive world that I never checked out before.
    • XG, the girl band I’d say I’m rooting for the most in pop music, performed at the Head in the Clouds festival in New York, and I watched some fancams of their set. The audio quality is terrible in all of them, so I won’t recommend starting there, but they are undeniably polished and tight as a group. Here are two music videos. I think I discovered them back in February, and at the time I said they’re gonna be monstrously huge this year. I stand by this.
    • If you watch the documentary series their label has put on YouTube, you’ll understand why they’re performing at this level: they’ve been physically and psychologically abused throughout the past six years of training, starting when some of them were just 12. It’s horrendous to watch, but also probably par for the industry. While it’s good that someone decided to plainly put this footage out there, I’m not seeing much of a backlash, so it’s probably too late and already normalized. Some of the stuff their boss/producer says and does is straight-up toxic emotional manipulation (he apparently came up as an idol himself, so it’s like Ted Lasso says in the latest episode: hurt people hurt people).
    • Ted Lasso is almost done with its third season, one episode to go. I’m still liking it much better than season two, although it is sooo uneven and odd in its choices. You know the adage, “show, don’t tell”? It’s like the concept of season three is to go against that conventional wisdom, a challenge the writing team decided to issue themselves: can we take lots of scenes that people want to see (scenes of closure, catharsis, and vindication!) and make them happen off-camera and between episodes? And after doing that, can we still make people care through the strength of our set pieces and touching monologues? That’s the only explanation I have for what’s been going on. And to the team’s credit, it works some of the time. It’s not conventional TV, and maybe that’s the point.
    • Platonic, the new sitcommy show on Apple TV+, is much more conventional. It’s about a male and female pair of friends who are really just friends (so far), and a comparison to When Harry Met Sally is drawn in the very first episode. They had a fight and haven’t spoken in years, and then reconnect on the cusp of middle age, when it’s notoriously difficult to form new friendships, let alone platonic ones. I think the concept and setup are strong, but the execution is a little spotty. I’m not really into Seth Rogen’s work, and his character here feels exactly like what you’d expect from one of his characters, but by the end of episode 2 I’d decided to keep watching. The most jarring thing is Rose Byrne’s quasi-Australian accent, which raises too many questions about how they met and got along in the past.
    • Speaking of actors whose strong accents shatter the suspension of disbelief, Arnold Fucking Schwarzenegger is back in a NETFLIX TV SERIES which sounds like a dreamy reboot of True Lies. The show is called FUBAR and it’s about a father and daughter who both secretly work for the CIA without knowing about each other’s involvement. I haven’t seen any of it yet, but I’m dying to.
    • It strikes me that in the future, one could give a crazy brief like the above to a generative AI system and start watching something like FUBAR within minutes.
    • My first music discovery of the week is Eternally Yours, a new Alphaville album that sees the band doing symphonic rearrangements of songs such as Big In Japan and Forever Young with a full orchestra. Yes, in Dolby Atmos spatial audio. This is a band that was formed 41 years ago and the lead singer’s voice is still incredible, iconic.
    • The second is Tears can be so soft, the new song by Christine and the Queens. It’s simple but surprisingly soulful, and sonically recalls Massive Attack’s best work.
  • Miss you, humanity

    Perhaps you’ve heard of what’s been going on in generative AI and thought, “I have no need to generate any pictures or professional email filler”, and dismissed the subject. Perhaps, like me, you’ve read that GPT-4 can now “pass exams” from the SATs to legal bar exams, accounting ones, and so on, without really knowing what that means. It can choose the right answer out of a list, and write little structured essays on given topics? That’s amazing, but not unexpected.

    But I’m here to tell you that I asked ChatGPT (GPT-4) a question yesterday, quite an exam question, and its answer floored me. These things are called generative AI, but that name undersells the incredible interpretive power they possess when it comes to language and writing comprehension.

    I asked it to analyze this poem by Gabrielle Calvocoressi, one of the most devastating things I’ve read in the past year: Miss you. Would like to grab that chilled tofu we love. Go read it, I’ll wait. Get some tissues.

    I asked it to explain what the poem was about, and to call out what it’s doing as a piece of writing to achieve its emotional impact. I’m not aware that any similar analysis of this poem exists online, if you’re wondering. My thoughts going into this: I know these AIs can write poetry, but I haven’t seen anything great. They must understand unstructured or unusually structured language to some degree, but can they “read” poetry from a human point of view? Probably not? Would they really understand what this poem is implying: the loss, the empty spaces, the long journey, the light body? And now I realize how dumb I was, because clearly GPT-4 has “read” thousands of books of and about poetry, and literary criticism, and style, and should know a lot more than the average person.

    Here’s its answer, unedited. Good enough to pass any high-school literature class, if not any blind test of humanity.

    ===

    This is a deeply emotional poem that appears to be exploring themes of grief, loss, longing, memory, and the power of shared experiences. It is presented as a monologue, presumably from the perspective of the speaker who is missing someone who is now absent. It’s filled with a sense of yearning, a desire for the return of the past and the presence of someone who is deeply missed.

    The poem starts off with the speaker expressing their indifference to the physical state of the missing person, suggesting that they would be happy even if the person could only present in their ‘light body’. This term could have different interpretations but it seems to suggest a spiritual or ethereal form, possibly indicating that the person is deceased or in a state where they cannot physically be present.

    The middle of the poem is filled with reminiscences of shared experiences. The speaker misses the shared rituals and discussions about mundane things such as the type of chilis used in a dish of tofu. This highlights the value of seemingly trivial shared moments in relationships and their ability to evoke strong emotions when remembered. The speaker also mentions watching the person on screens and wishing they could get out of the hospital, which suggests a period of illness or suffering that the person had to endure.

    Towards the end, the speaker talks about being willing to order their shared favorite dish and eat it alone, even though it seems painful for them. This can be seen as an attempt to honor their shared memories and perhaps a step towards acceptance of the loss.

    The speaker also acknowledges not knowing the journey the absent person might be on, whether it’s long or not, and expresses a wish to know more about their current state – what they’re reading, if they’re reading, which suggests a longing for a continuation of their shared intellectual and emotional connection.

    Throughout the poem, the simple language and the direct, honest sentiment adds to its emotional power. The repetition of “Miss you” and “Wish you could” emphasizes the sense of longing and loss. The specific details of shared experiences and the mundanity of their shared conversations humanize the relationship, making the emotions more relatable and poignant.

    Overall, the poem is a moving exploration of love, loss, and the power of shared experiences. It speaks to the human condition of missing loved ones, the difficulty of letting go, and the beautiful yet painful memories that bind us to the past.

  • Hipstamatic returns

    You may have heard that Hipstamatic X has relaunched with a social network and is now simply the de facto Hipstamatic app (the original is available as Hipstamatic Classic). It’s completely ad-free and user-supported via an optional subscription, with a focus on very 2010 principles such as posting square photos only, a 99-person follow limit, and heavy filters that seem at odds with current aesthetic trends.

    I’d been in the beta for a while but found it too similar to the original Hipstamatic X, and the added social network had little utility for me back then. Now that it’s out, and I’m in Japan on holiday, I’ve found it a fun nostalgic toy that recalls earlier trips when my iPhone was my main camera and I’d occasionally risk losing a few shots by choosing Hipstamatic over the regular camera app. That was back when the processed photo was all you got; these days the processed photo can be reverted to the original underneath, which is quite liberating.

    I’ve been shooting regular old photos with my Ricoh GR III, which left my iPhone 14 Pro with an ambiguous role: better than nothing in a pinch, but not good enough yet to rival an APS-C sensor, even with computational smarts (or perhaps because of them?). So many photos look artificially sharp and HDR-like by default and don’t capture the mood accurately. I know it’s tuned to what most people want from a photo (brightness). But with Hipstamatic (and a complication shortcut on the Lock Screen that launches it immediately), the iPhone suddenly feels like a very different tool.

    Hipstamatic acts like an intentionally inaccurate camera. Its lurid colors add a veneer of personality to mundane scenes and, if you’re lucky or you prod carefully via the paid darkroom editing mode, enhance good compositions and subjects by catapulting them into an attractive un-reality. The results are (somewhat) like William Shatner acting, like The Darkness’s I Believe In A Thing Called Love, like George Miller and Margaret Sixel going HAM with Mad Max: Fury Road. But the stakes are low. You just have to snap away and see where it takes you. It’s the very opposite of a GR or Q in your hand. Nothing matters except having some stupid fun and, if you care about the network, posting shots to see if anyone will put a skeuomorphic “yummy” or “that’s fire” stamp on the back of your virtual print.

    Jose made a keen observation when I told him it was back and I was enjoying it. He said the original Hipstamatic was novel because its frames and filters were a throwback to analog prints and toy cameras. And now in 2023, it’s a throwback to the throwback that we’re enjoying.

    Here are some photos so far.