• It is an easy idea to mock: “Netflix is trying to become Blockbuster thirteen years after killing it”, etc. I think this could be a smart move though, even if these locations turn out to be not much more than upscale, well-run theaters.

    Americans, on average, visit movie theaters once a year. Alamo Drafthouse, with only fifteen locations outside of Texas, has built an outsized reputation by delivering a superior moviegoing experience compared to traditional theater chains. I don’t think it is a stretch to imagine Netflix achieving a similar status. Their goals are larger than simply building theaters, though.

    Andrew Liszewski, The Messenger:

    Netflix plans to open retail destinations where fans of the company’s most popular streaming series can buy merchandise, dine on themed food and even partake in unique experiences, like a Squid Game obstacle course or a visit to the Upside-Down […]

    Slated to open sometime in 2025, the new venues will fall under the name “Netflix House” and will be the company’s first permanent locations […]

    “Rotating installations,” a mix of both casual and high-end food offerings and even “ticketed shows” will encourage fans to return to the venues frequently.

    Netflix’s prospects were looking grim for a while but I am increasingly convinced that if there is eventually only going to be one streamer standing, it will be them.

  • Emilia David, The Verge:

    Getty Images is partnering with Nvidia to launch Generative AI by Getty Images, a new tool that lets people create images using Getty’s library of licensed photos.

    Generative AI by Getty Images (yes, it’s an unwieldy name) is trained only on the vast Getty Images library, including premium content, giving users full copyright indemnification. This means anyone using the tool and publishing the image it created commercially will be legally protected, promises Getty.

    I last wrote about Getty back in February when they filed a lawsuit against Stability AI at the same time their largest competitor, Shutterstock, announced their own image generation service. I was in favor of their strategy at the time. Generative AI presented a clear opportunity for differentiation. It seemed as though they were positioning themselves to be staunch supporters of human-made art:

    My knee-jerk reaction is to say that Getty is behind the times here but, after thinking about this a little bit more, I am less sure about that.

    If Shutterstock starts re-licensing AI generated images, why would you pay for them instead of paying OpenAI or Midjourney directly? More to the point, why not use Stable Diffusion to generate images, for free, on your own computer?

    Getty Images, on the other hand, gets to be the anti-AI company selling certified human-made images. I can see that being a valuable niche for some time to come.

    Do you just need an obligatory feature image to slap on top of your SEO-bait blog post? Go to Shutterstock or DALL-E or any of the hundreds of fly-by-night AI image generation services. If you want to Support Human Artists, however, Getty is the only place to go.

    With today’s announcement Getty has abandoned their opportunity for differentiation. I’ll be interested to see who steps up to fill that role.

  • Jennifer Pattison Tuohy, The Verge:

    At its fall hardware event Wednesday, [Amazon] revealed an all-new Alexa voice assistant powered by its new Alexa large language model. According to Dave Limp, Amazon’s current SVP of devices and services, this new Alexa can understand conversational phrases and respond appropriately, interpret context more effectively, and complete multiple requests from one command.

    Back in January, I wrote about how bewildering I found it that no major company had integrated large language model technology into their voice assistants. January!

    It’s the APIs that are key, says Limp. “We’ve funneled a large number of smart home APIs, 200-plus, into our LLM.” This data, combined with Alexa’s knowledge of which devices are in your home and what room you’re in based on the Echo speaker you’re talking to, will give Alexa the context needed to more proactively and seamlessly manage your smart home.

    The big difference between conversational AIs like ChatGPT and traditional voice assistants is that the latter has to interact with the outside world. If the new Alexa can’t turn on my lights and set timers, that will be a regression. This sounds like it will use a ReAct pattern, which is a smart approach, but only time will tell how solid it actually is.
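
    For the unfamiliar, ReAct interleaves model reasoning with tool calls, feeding each result back into the prompt. Below is a minimal sketch of that loop. To be clear, Amazon hasn’t published Alexa’s actual architecture; the llm callable and the toy tool registry here are hypothetical stand-ins.

    ```python
    import json

    # Hypothetical device functions standing in for the "200-plus" smart home APIs.
    TOOLS = {
        "turn_on_light": lambda room: f"{room} light is now on",
        "set_timer": lambda minutes: f"timer set for {minutes} minutes",
    }

    def react_loop(llm, request, max_steps=5):
        """Alternate between model reasoning and tool execution."""
        transcript = f"User: {request}\n"
        for _ in range(max_steps):
            # Assume the model replies with JSON: either a tool call
            # {"thought": ..., "action": ..., "args": {...}} or {"answer": ...}.
            step = json.loads(llm(transcript))
            if "answer" in step:
                return step["answer"]
            observation = TOOLS[step["action"]](**step["args"])
            transcript += (
                f"Thought: {step['thought']}\n"
                f"Action: {step['action']}({step['args']})\n"
                f"Observation: {observation}\n"
            )
        return "Sorry, I wasn't able to finish that."
    ```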

    Ultimately, I think the reason that more companies haven’t yet introduced LLM-backed assistants is that they are an expensive replacement for a technology end-users have traditionally gotten for free. Amazon seems unsure about how they will handle that.

    Limp said that while Alexa, as it is today, will remain free, “the idea of a superhuman assistant that can supercharge your smart home, and more, work complex tasks on your behalf, could provide enough utility that we will end up charging something for it down the road.”

    Ironically, the two uses of the word “super” in the previous quote don’t inspire much confidence.

  • Samuel Hughes:

    Architecture is a public art, a vernacular art, and a background art: it is created by a huge range of people, and experienced involuntarily by an even wider one. This means that we need architectural styles that are as accessible as possible, to the full range of people who live with what we build, and to the full range of builders who create it.

    We enjoy creating things we can inhabit. We build blanket forts as soon as we can crawl. Later, we graduate to making treehouses and stick tepees in the summer and igloos in the winter. Failing that, we build things we imagine we could inhabit: Lego towers, sand castles, doll houses, vast Minecraft fortresses…

    Once we reach adulthood, architecture—building structures—becomes inaccessible to all but the select few who have chosen it as a profession.

    I’ve spent this summer building a small greenhouse in my backyard. It has been immensely satisfying to simply open its door and watch the way sunlight plays over its interior. To cheer it on as it withstands heavy wind gusts. To shelter under its roof during rainstorms. To know how to repair it because I put in all of the screws to begin with.

    But my little greenhouse is also totally prohibited! I didn’t obtain a permit from my city before I started construction. At any time my local building department could mandate that I take it all apart and then fine me for the trouble. Of course, building codes and permitting serve a valuable function. Nonetheless, we need an outlet for amateur architecture because the desire to build doesn’t die after childhood.

  • Matt Webb continues to write the internet’s most thought-provoking meditations on AI:

    If we are going to have AIs living inside our apps in the future, apps will need to offer a realtime NPC API for AIs to join and collaborate […]

    You create a “pool” or a cursor park or (as I call it) an embassy on the whiteboard. The NPCs need somewhere to hang out when they’re idle. […]

    NPCs can be proactive! The painter dolphin likes to colour in stars. When you draw a star, the painter cursor ventures out of the embassy and comes and hovers nearby… “oh I can help” it says. It’s ignorable (unlike a notification), so you can ignore it or you can accept its assistance. At which point it colours the star pink for you, then goes back to base till next time. […]

    Cursor distance = confidence. When an NPC wants to be proactive, it can hover nearby. It can be pushy when it knows it can help. (It can remember not to pipe up again if it is banished.) There’s a lot of resolution to explore here.

    Visual interfaces need a ‘suggestion language’ which is as good as ghosted text is for autocomplete.

    Chat is a language model’s terminal interface—a critical affordance when low-level input is required but a poor choice when discovery and intuitive ease of use are the priority.
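
    To make this tangible, here is a speculative sketch of what a collaborative canvas might expose to an NPC. Every name in it (the canvas events, the offer method, the embassy) is hypothetical; no such API exists today.

    ```python
    # Speculative design sketch; none of these classes or methods are a real API.
    class PainterNPC:
        """Webb's painter dolphin: idles in the embassy, offers help when relevant."""

        def __init__(self, canvas):
            self.canvas = canvas
            self.banished = False
            canvas.subscribe("shape_added", self.on_shape_added)

        def on_shape_added(self, shape):
            if shape.kind != "star" or self.banished:
                return
            # Proactive but ignorable: hover nearby rather than firing a notification.
            self.canvas.move_cursor(self, near=shape)
            offer = self.canvas.offer(self, "Oh, I can help colour that in")
            if offer.accepted:
                shape.fill = "pink"
            elif offer.dismissed:
                self.banished = True  # remember not to pipe up again
            self.canvas.move_cursor(self, to="embassy")  # back to base till next time
    ```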

  • Ina Fried, Axios:

    Google plans to overhaul its Assistant to focus on using generative AI technologies similar to those that power ChatGPT and its own Bard chatbot, according to an internal e-mail sent to employees Monday […]

    As part of the move, Google is reorganizing the teams that work on Assistant… The move will involve eliminating dozens of jobs, Axios is told, out of the thousands of employees who work on the Assistant

    Google Assistant isn’t as embarrassing as some of its competitors. Still, it was shocking to read that “thousands of employees” work on Assistant.

    I’ve long bemoaned the fact that Google, Apple, and Amazon haven’t incorporated generative AI into their legacy “AI assistants.” Microsoft replaced Cortana with Bing AI a couple of months ago but I am not sure anyone ever used Cortana to begin with.

    This is clearly a step in the right direction from Google. Apple and Amazon appear to be on a similar path themselves. It will be interesting to look back at the state of the assistants this time next year.

  • It is not the most creative name they could have chosen, but Meta released a successor to their open source “LLaMA” language model yesterday.

    Meta:

    We’re now ready to open source the next version of Llama 2 and are making it available free of charge for research and commercial use. We’re including model weights and starting code for the pretrained model and conversational fine-tuned versions too.

    Unlike the original LLaMA release, Meta took the extra step to license this new model for commercial use.

    As Satya Nadella announced on stage at Microsoft Inspire, we’re taking our partnership to the next level with Microsoft as our preferred partner for Llama 2 and expanding our efforts in generative AI. Starting today, Llama 2 is available in the Azure AI model catalog, enabling developers using Microsoft Azure to build with it and leverage their cloud-native tools for content filtering and safety features. It is also optimized to run locally on Windows, giving developers a seamless workflow as they bring generative AI experiences to customers across different platforms.

    Just unbelievable positioning from Microsoft. Not only is their infrastructure powering all of OpenAI’s models, they are now also working with Meta to support the leading alternative to OpenAI.
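
    For anyone who wants to kick the tires, the weights are already available. Here is a minimal sketch, assuming the Hugging Face transformers library and approved access to the gated meta-llama/Llama-2-7b-chat-hf checkpoint:

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Generate a short completion to confirm everything loads.
    inputs = tokenizer("Why did Meta open source Llama 2?", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```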

  • Ernie Smith:

    The thing that I think made the internet such an interesting place in its early years was because it didn’t feel like a controlled environment. The chaos was everywhere. It was messy. It was grimy.

    […]

    I will not say that this was perfect, but the chaotic effect was interesting, and interesting was often enough to continue using, because it meant there were always new surprises. For lots of people, chaos often breeds new ways of thinking.

    […]

    Threads threatens to be social media’s Disney World.

    Disney World has its place but I am more interested in the ragged edges, the avant-garde, social media’s… Chicago? But also the contemplative, slow, and deliberate—Lancaster?

    Long live the open web.

  • Madhumita Murgia, Financial Times:

    Greg Marston, a British voice actor with more than 20 years’ experience, recently stumbled across his own voice being used for a demo online.

    Marston’s was one of several voices on the website Revoicer, which offers an AI tool that converts text into speech…

    Since he had no memory of agreeing to his voice being cloned using AI, he got in touch with the company. Revoicer told him they had purchased his voice from IBM.

    In 2005, Marston had signed a contract with IBM for a job he had recorded for a satnav system. In the 18-year-old contract, an industry standard, Marston had signed his voice rights away in perpetuity

    The problem here isn’t AI. The problem is that it is possible—and, apparently, standard—to sign vital rights away to companies.

    Not having full license over your own voice, as a voice actor, is ridiculous. It is unconscionable that we have allowed conditions to develop such that it has become an accepted part of the occupation.

    Pavis [a lawyer who specializes in digital cloning technologies] said she has had at least 45 AI-related queries since January, including cases of actors who hear their voices on phone scams such as fake insurance calls or AI-generated ads.

    Okay, AI voice synthesis companies definitely hold some blame here. Generating a new, non-specific, synthetic voice is one thing; cloning an individual’s unique voice is something else altogether.

  • Google is no longer working to build an augmented reality hardware platform. They will be shifting their energy towards creating AR software instead. It is hard to believe this wasn’t at least partially prompted by the Vision Pro.

    Hugh Langley, Business Insider:

    Google killed off a project to build a pair of augmented-reality glasses it had been working on for several years.

    […]

    The glasses, known internally by the codename Iris, were shelved earlier this year following layoffs, reshuffles, and the departure of Clay Bavor, Google’s chief of augmented and virtual reality, according to three people familiar with the matter.

    […]

    Since shelving the Iris glasses, Google has focused on creating software platforms for AR that it hopes to license to other manufacturers building headsets… One employee described Google’s new ambition as being the “Android for AR”

    Of course they should build “Android for AR” and sell it to whoever is interested, but they shouldn’t let that get in the way of developing great first-party applications for all headset platforms.

    The advantage of giving up on the hardware market is that they don’t have to weigh direct competition as heavily in their decision making.

    Meta, especially, must be thrilled.

  • Apple:

    Apple today announced the availability of new software tools and technologies that enable developers to create groundbreaking app experiences for Apple Vision Pro — Apple’s first spatial computer.

    […]

    With the visionOS SDK, developers can utilize the powerful and unique capabilities of Vision Pro and visionOS to design brand-new app experiences across a variety of categories including productivity, design, gaming, and more.

    This is all a part of Xcode 15, which you can download today.

    Playing around with the visionOS simulator is fascinating. It already exposes a lot of the final operating system—including first-party applications and design elements—that I hadn’t previously seen elsewhere.

    The new Reality Composer Pro application is also more powerful than I would have expected. It feels like a stripped down version of Unity3D. I hope Apple continues development on it. I would love to see it eventually become a full-fledged 3D development environment.

  • It is important to begin by noting that I am not a vegetarian—I eat meat.

    Still, there is something undeniably strange about eating meat nowadays. I think it stems from the fact that most of us are completely disconnected from the production of the meat we consume.

    Plus, we eat a lot more meat than ever before.

    In 1960, Americans ate an average of 28 pounds of chicken per person. In 2022, it was more than 100 pounds.

    More than 70 billion chickens are slaughtered annually. To put that number in perspective, it is estimated that 100 billion humans have ever existed throughout the entire life of our species.

    Again, I don’t mention all of this to be preachy or judgmental—I eat meat and I don’t raise the meat that I eat myself. All of this is to say that there is a serious cost to the ever-increasing quantity of meat that most of us consume.

    Jonel Aleccia & Laura Ungar, AP News:

    For the first time, U.S. regulators on Wednesday approved the sale of chicken made from animal cells, allowing two California companies to offer “lab-grown” meat to the nation’s restaurant tables and eventually, supermarket shelves.

    […]

    In a recent poll conducted by The Associated Press-NORC Center for Public Affairs Research, half of U.S. adults said that they are unlikely to try meat grown using cells from animals. When asked to choose from a list of reasons for their reluctance, most who said they’d be unlikely to try it said “it just sounds weird.” About half said they don’t think it would be safe.

    […]

    It could take a few years before consumers see the products in more restaurants and seven to 10 years before they hit the wider market… Cost will be another sticking point… Eventually, the price is expected to mirror high-end organic chicken, which sells for up to $20 per pound.

    There are still big challenges that need to be solved before cultivated meat can become mainstream. Consumer acceptance and cost are both particularly salient. At least now regulatory hurdles can be checked off that list.

  • Billy Perrigo, Time:

    The CEO of OpenAI, Sam Altman, has spent the last month touring world capitals where, at talks to sold-out crowds and in meetings with heads of governments, he has repeatedly spoken of the need for global AI regulation.

    But behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company

    The Time article above contains the entirety of a previously unreleased document OpenAI wrote for E.U. officials.

    Here is the thing: I agree with Altman that the E.U.’s AI Act was too broad. That isn’t where I take issue with this.

    The problem is that Altman has been spending his time publicly lobbying for regulation when it would hurt his competitors while privately pushing for the opposite when it would affect him.

    Again, an obvious push for regulatory capture.

    OpenAI has pledged to stop competing with any company that gets close to surpassing its capabilities. The fear is that competitive “race dynamics” would lead to unsafe development and deployment practices.

    From OpenAI’s founding Charter:

    We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.

    This was again emphasized in the GPT-4 technical report:

    One concern of particular importance to OpenAI is the risk of racing dynamics leading to a decline in safety standards, the diffusion of bad norms, and accelerated AI timelines, each of which heighten societal risks associated with AI.

    There is the straightforward way to honor this promise: keep chugging along for now and, if a company later comes along and laps OpenAI, give up the fight fair and square.

    I think Altman’s actions these past few months have demonstrated he is taking another, less charitable, approach: if OpenAI can bog down competitors with arduous regulations, they will never have to give up their lead.

    So sure, you could say that this is consistent with their stated views on AI safety—they naturally trust their own development safeguards more than they trust others—but it is also hypocritical and dishonest.

  • Paul Ford:

    Dad wrote opaque, elliptical, experimental works of enormous profanity… The upshot was 70 years of writing on crumbling yellow onionskin, dot-matrix prints with the tractor feeds still attached, and bright white laser output, along with more than 10,000 ancient WordPerfect files and blog entries, including many repeats. Now all mine to archive.

    […]

    After I parsed and processed and batched his digital legacy, it came to 7,382 files and around 7 gigabytes.

    The sum of Frank took two days and nights to upload to the Internet Archive

    […]

    In time, we all end up in a folder somewhere, if we’re lucky. Frank belongs to the world now; I released the files under Creative Commons 0, No Rights Reserved. And I know he would have loved his archive.

    Visit Frank on the Internet Archive.

  • I was excited when StabilityAI—the company behind Stable Diffusion—launched StableLM, their open source language model with a commercially permissive license. I was convinced it would become the new hub for open source community development.

    Prior to the announcement, developers had coalesced around Meta’s LLaMA model which had always been a somewhat tenuous situation. It was initially only available to select researchers before it was leaked to the public. Since then, the company hasn’t been entirely clear in its messaging. On one hand, Mark Zuckerberg has expressed a desire to commodify generative AI through open source contributions. On the other hand, they have been issuing DMCA takedown requests for seemingly innocuous projects that incorporate LLaMA.

    Now, two months after StableLM’s launch, it has become clear how difficult it is to redirect inertia. The open source community has continued contributing to LLaMA and development on StableLM has stalled. As I write this, there have been no updates to the StableLM code since April.

    Well, it seems like Meta might be on the verge of announcing a successor to LLaMA with a more permissive license, allowing for commercial use.

    Sylvia Varnham O’Regan, Jon Victor, and Amir Efrati, The Information:

    Meta is working on ways to make the next version of its open-source large-language model—technology that can power chatbots like ChatGPT—available for commercial use, said a person with direct knowledge of the situation and a person who was briefed about it. The move could prompt a feeding frenzy among AI developers eager for alternatives to proprietary software sold by rivals Google and OpenAI.

    Although Meta didn’t originally intend for the open source language model community to form around their models, they may as well come out and fully embrace it. It is their best chance at disrupting Microsoft and Google’s dominance.

  • Watching language model tooling slowly mature, I find it interesting to see a progressive constraining of capabilities.

    Historically, programming languages have become more abstract (“higher-level”) over time:

    Assembly → C → Python

    With language models, we may have arrived at the highest possible level of abstraction—natural language—and now we are beginning to wrap back around the other way.

    A high level of abstraction is great in that it lowers the barrier to entry for programming, but it comes at the cost of increased ambiguity. Sure, your compiler can now try to guess your intentions, but that doesn’t mean you would always like it to.

    Even more important is the fact that language models are non-deterministic. That is, each successive time you run your “program” you might receive a different output.

    This is a huge problem, almost a non-starter when it comes to integrating LLMs into traditional programming pipelines. That is why so much research has gone into making LLMs reliably output a more constrained set of tokens that can be validated according to a predetermined schema.

    JSONformer, GPT-JSON, and Guidance are all examples of prior work along these lines.
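
    The underlying recipe is usually some variation of the sketch below: ask for JSON, validate it against a schema, and retry on failure. This is a generic illustration rather than any one project’s code; the llm callable is an assumed stand-in for a completion API.

    ```python
    import json

    import jsonschema

    SCHEMA = {
        "type": "object",
        "properties": {"city": {"type": "string"}, "days": {"type": "integer"}},
        "required": ["city", "days"],
    }

    def structured_query(llm, prompt, retries=3):
        """Coax a non-deterministic model into producing schema-valid output."""
        instruction = f"{prompt}\nRespond only with JSON matching: {json.dumps(SCHEMA)}"
        for _ in range(retries):
            raw = llm(instruction)
            try:
                data = json.loads(raw)
                jsonschema.validate(data, SCHEMA)
                return data  # validated, safe to hand to downstream code
            except (json.JSONDecodeError, jsonschema.ValidationError):
                continue  # different run, different output; just try again
        raise ValueError("model never produced schema-valid JSON")
    ```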

    Well, earlier this week OpenAI announced a new API endpoint that points to models they finetuned for exactly this purpose.

    OpenAI:

    Developers can now describe functions to gpt-4-0613 and gpt-3.5-turbo-0613, and have the model intelligently choose to output a JSON object containing arguments to call those functions. This is a new way to more reliably connect GPT’s capabilities with external tools and APIs.
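
    In practice that looks something like the following sketch, which follows the signatures OpenAI documented at launch for the pre-1.0 openai Python package; the weather function itself is a hypothetical placeholder.

    ```python
    import json

    import openai

    functions = [{
        "name": "get_weather",
        "description": "Get the current weather in a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": "What is the weather in Boston?"}],
        functions=functions,
        function_call="auto",
    )

    message = response["choices"][0]["message"]
    if message.get("function_call"):
        # The model chose to call the function and returned JSON arguments for it.
        args = json.loads(message["function_call"]["arguments"])
        # ...call a real weather API with args["city"], then send the result back
        # to the model in a follow-up "function" role message...
    ```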

    I can’t wait to see what people are able to accomplish using these new capabilities.

  • Byron Tau and Dustin Volz, The Wall Street Journal:

    The vast amount of Americans’ personal data available for sale has provided a rich stream of intelligence for the U.S. government but created significant threats to privacy, according to a newly released report by the U.S.’s top spy agency.

    Commercially available information, or CAI, has grown in such scale that it has begun to replicate the results of intrusive surveillance techniques once used on a more targeted and limited basis, the report found.

    Intelligence agencies don’t need to request a warrant for a piece of information if they can purchase it from public sources instead.

    The proliferation of data brokers who specialize in compiling and selling sensitive information has only exacerbated this problem.

    Quoted directly from the report:

    Under the U.S. Constitution… CAI is generally less strictly regulated than other forms of information acquired by the [intelligence community (IC)], principally because it is publicly available. In our view, however, changes in CAI have considerably undermined the historical policy rationale for treating [publicly available information (PAI)] categorically as non-sensitive information, that the IC can use without significantly affecting the privacy and civil liberties of U.S. persons. For example, under Carpenter v. United States, acquisition of persistent location information… concerning one person by law enforcement from communications providers is a Fourth Amendment “search” that generally requires probable cause. However, the same type of information on millions of Americans is openly for sale to the general public. As such, IC policies treat the information as PAI and IC elements can purchase it.

    I understand that it would be foolish to expect intelligence agencies to abide by a stricter set of data privacy rules than civilians. Still, I don’t feel great about public money being used to support and encourage data brokers.

    In the end, you can’t sell what you don’t have. This report reinforces my view that end-to-end encryption should be the only acceptable solution for storing personal information.

  • OpenAI:

    In recent years, large language models have greatly improved in their ability to perform complex multi-step reasoning. However, even state-of-the-art models still produce logical mistakes, often called hallucinations.

    […]

    We can train reward models to detect hallucinations using either outcome supervision, which provides feedback based on a final result, or process supervision, which provides feedback for each individual step in a chain-of-thought… We find that process supervision leads to significantly better performance, even when judged by outcomes.
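
    To make the distinction concrete, here is a toy contrast of the two signals. This is my own illustration, not OpenAI’s code; in the real setup the per-step probabilities come from a trained reward model.

    ```python
    def outcome_reward(final_answer, correct_answer):
        # Outcome supervision: one label for the whole solution, based
        # only on whether it lands on the right answer.
        return 1.0 if final_answer == correct_answer else 0.0

    def process_reward(step_probs):
        # Process supervision: every step in the chain of thought gets
        # scored. One common aggregation treats the solution as correct
        # only if every step is, multiplying the per-step probabilities.
        score = 1.0
        for p in step_probs:
            score *= p
        return score

    # A single dubious step sinks the whole trace, even if the rest looks fine.
    print(process_reward([0.99, 0.2, 0.98]))  # ~0.19
    ```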

    This technique was evaluated using questions from a large mathematics dataset. This is an important caveat, as math is a domain well-versed in the practice of “showing your work.” Presumably GPT-4’s training corpus includes many instances of people walking through math problems step by step. The relative performance of process supervision on questions from other domains is still unknown.

  • Facebook has released another open source model as they work to commodify generative AI.

    Facebook Research:

    We introduce MusicGen, a single Language Model (LM) that operates over several streams of compressed discrete music representation… we demonstrate how MusicGen can generate high-quality samples, while being conditioned on textual description or melodic features, allowing better controls over the generated output.

    I was amazed by Google’s MusicLM model earlier this year. Facebook provides side-by-side comparisons here that demonstrate MusicGen is clearly superior. It isn’t an enormous leap, but audio generated using Google’s model has a distinct “compressed” quality that is greatly diminished in Facebook’s implementation.

    More importantly, MusicGen is completely open. Google only recently allowed beta testing of MusicLM through their AI Test Kitchen App and, even so, generations are limited to 20 seconds. Here, Facebook released both their code and model weights on GitHub and spun up a Colab notebook demo.
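
    That openness makes it easy to experiment. Here is a minimal sketch based on the audiocraft repository’s README at release; the model names and function signatures may have shifted since.

    ```python
    from audiocraft.models import MusicGen
    from audiocraft.data.audio import audio_write

    model = MusicGen.get_pretrained("small")  # also "medium", "large", "melody"
    model.set_generation_params(duration=8)   # seconds of audio to generate

    wav = model.generate(["lo-fi hip hop with warm piano chords"])

    for idx, one_wav in enumerate(wav):
        # Writes sample_0.wav with loudness normalization.
        audio_write(f"sample_{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")
    ```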

  • It is official: Cortana is dead.

    Microsoft:

    We are making some changes to Windows that will impact users of the Cortana app. Starting in late 2023, we will no longer support Cortana in Windows as a standalone app.

    […]

    We know that this change may affect some of the ways you work in Windows, so we want to help you transition smoothly to the new options. Instead of clicking the Cortana icon and launching the app to begin using voice, now you can use voice and satisfy your productivity needs through different tools.

    They go on to pitch their new GPT-powered “Copilot” features.

    Watch out Google Assistant, you’re next.

  • AI generated video still lags behind AI imagery by quite a large margin. Still, some artists are forging ahead and exploring what is possible with the tools available today.

    Will Douglas Heaven, MIT Technology Review:

    The Frost is a 12-minute movie in which every shot is generated by an image-making AI.

    […]

    To make The Frost, Waymark took a script written by Josh Rubin, an executive producer at the company who directed the film, and fed it to OpenAI’s image-making model DALL-E 2… Then they used D-ID, an AI tool that can add movement to still images, to animate these shots, making eyes blink and lips move.

    Some of the characters almost look like Kara Walker’s paper-cutout silhouettes; others have more detail, as if they were assembled out of various magazine clippings; none of them look alive. There is a pervasive surreal sense that everything in The Frost’s world has been reanimated. It is weird and new and fascinating. Highly recommended.

  • As the dust settles on Apple’s Vision Pro headset announcement, critical reactions are mostly about the same thing: wearing big goggles around other people is weird and no one is going to want to do it.

    Ben Thompson articulated this general critique quite clearly:

    I didn’t even get into one of the features Apple is touting most highly, which is the ability of the Vision Pro to take “pictures” — memories, really — of moments in time and render them in a way that feels incredibly intimate and vivid.

    One of the issues is the fact that recording those memories does, for now, entail wearing the Vision Pro in the first place, which is going to be really awkward!

    …it’s going to seem pretty weird when dad is wearing a headset as his daughter blows out birthday candles

    This isn’t the first time we’ve had to contend with weird new technology. Matt Birchler offers the two most likely paths the Vision Pro might take:

    The question is, what’s this going to be like:

    1. AirPods, which many people thought looked silly at first but then people got used to them.
    2. Camcorders, which went from kinda awkward to mainstream over decades and massive advances in the tech.

    When AirPods first launched, I remember how viscerally strange I found them. Now, not only do I use AirPods religiously, I don’t even remember why I thought they were so weird in the first place. If Apple can pull that off again, we will be in for a wild next few years.

  • Apple kicked off its annual WWDC conference on Monday. Here are my initial impressions after watching the keynote:

    macOS, iOS, and watchOS

    • There were a lot of mentions of “on-device intelligence” and “machine learning.” No one said “AI.”
    • There is a new Mac Pro with Apple Silicon as well as a 15” MacBook Air. Both will be available next week.
    • The iOS 17 presentation started with “big updates to the Phone app” which I would have never in a million years guessed. I will admit, the new “Live Voicemail” feature looks great though.
    • The long segment dedicated to iMessage’s “new Stickers experience” should put to rest fears that Apple would ever feel pressed for time. Indeed, the keynote was over two hours long.
    • Autocorrect in iOS 17 is powered by “a transformer-based on-device language model.” It will be able to correct on a sentence level rather than individual words.
    • The Journal app that was rumored is real but won’t be available at launch—it is coming “later this year”
    • Interactivity is coming to widgets on all platforms. On macOS Sonoma, you will be able to add widgets to the desktop.
    • You will be able to set multiple simultaneous timers
    • Death Stranding will be coming to Apple Silicon Macs. There was no mention of it during the later headset discussion.
    • Safari gets a new Profiles feature. I’ve always loved Containers in Firefox and have missed them since switching to Safari. It seems like a logical extension of the OS-wide Focus Mode feature they introduced last year.
    • watchOS 10 is launching with a comprehensively redesigned interface. A notable exception is the “honeycomb” app launcher, which appears unchanged.
    • There is still no ability for third party developers to create custom watch faces. Apple is offering the consolation prize of “a new Snoopy and Woodstock watch face.”
    • iPadOS got… no new pro-level features? I am kind of shocked Apple didn’t save their recent Final Cut and Logic Pro app release announcement for this event.

    One more thing

    • It is official: Apple announced their new XR goggles and they are called “Vision Pro”
    • Apple is calling this their first “spatial computing” device, which is a better descriptor than AR/VR/XR
    • They really do look a lot like ski goggles
    • There is a screen on the front of them that displays your eyes. It is a weird concept that was executed in a much better way than I would have ever expected. The more I think about it—and I can’t believe I’m saying this—it might be the defining innovation here. I expect to see it copied by other hardware makers soon.
    • The hardware looks bulky and awkward. The software, UX, and design language, though, looks incredible.
    • For input, there is eye tracking, hand gesture recognition, voice, and a virtual keyboard. Vision Pro also works with physical Magic Keyboards and game controllers.
    • The headset can capture 3D photographs and videos
    • It has two hours of battery life with an external battery
    • Leading up to this event, a lot of people were speculating the Vision Pro would be cheaper than its rumored price of $3000—in reality, it will be more expensive at $3499.
    • Vision Pro is clearly a first-generation product. It is expensive and has a short battery life even with bulky hardware and an external battery pack. Waiting for the second-generation version is unquestionably the smartest decision. It is going to be extremely tempting, though. At least I’ll have some time to decide—it will be available to purchase next year.
    • I can’t wait to try them

  • NPR’s Planet Money podcast has just concluded a three-part series where they used generative AI to write an episode for them.

    Kenny Malone, Planet Money:

    In Part 1 of this series, we taught AI how to write an original Planet Money script by feeding it real research and interviews. In Part 2, we used AI to clone the voice of our former colleague Robert Smith. 

    Now, we’ve put everything together into a 15-minute Planet Money episode.

    I didn’t find the simulated Robert Smith voice to be particularly convincing, but that might be because I have so much experience listening to the real Robert Smith. I think AI generated voices are already good enough to tackle many lower-stakes applications, but pacing and inflection are too important to podcasting, and we are just not quite there yet.

    In terms of content, I thought the episode was, at times, slightly nonsensical and bland but overall totally passable. If I wasn’t primed in advance to expect AI content, there is a chance I wouldn’t have noticed.

    I don’t think I would feel particularly good about spending too much of my time listening to wholly AI generated podcasts but I think it is somewhat inevitable once the voice simulation technology improves.

    It would be fascinating to see the Planet Money team revisit this experiment in a few years.

  • Guanzhi Wang et al.:

    We introduce Voyager, the first LLM-powered embodied lifelong learning agent in Minecraft that continuously explores the world, acquires diverse skills, and makes novel discoveries without human intervention… Voyager is able to utilize the learned skill library in a new Minecraft world to solve novel tasks from scratch

    […]

    Voyager exhibits superior performance in discovering novel items, unlocking the Minecraft tech tree, traversing diverse terrains, and applying its learned skill library to unseen tasks in a newly instantiated world. Voyager serves as a starting point to develop powerful generalist agents without tuning the model parameters.

    Last month we saw Stanford researchers create a version of The Sims inhabited by LLM-powered agents. These agents exhibited surprisingly complex social skills.

    This new research shows that agents based on a similar architecture can create and explore in novel environments.
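
    The architectural trick that makes this work is the growing skill library: code that succeeds in the world gets stored and retrieved for later tasks, instead of retraining the model. Here is a toy sketch of the idea; the names are hypothetical and nothing here comes from the Voyager codebase. The paper retrieves skills by embedding similarity, which simple keyword overlap stands in for below.

    ```python
    class SkillLibrary:
        """Accumulate executable skills; reuse them rather than tuning weights."""

        def __init__(self):
            self.skills = {}  # description -> code the agent has verified in-world

        def add(self, description, code):
            self.skills[description] = code

        def retrieve(self, task, top_k=3):
            # Rank stored skills by crude keyword overlap with the new task.
            def overlap(description):
                return len(set(description.split()) & set(task.split()))
            ranked = sorted(self.skills, key=overlap, reverse=True)
            return [self.skills[d] for d in ranked[:top_k]]

    library = SkillLibrary()
    library.add("craft a stone pickaxe", "def craft_stone_pickaxe(bot): ...")
    library.add("fight a zombie with a sword", "def fight_zombie(bot): ...")
    print(library.retrieve("craft an iron pickaxe", top_k=1))
    ```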

    As this technology becomes less expensive, we will start to see incredible new virtual experiences that were previously unimaginable.

    As capabilities improve further, we will reach the point where we pass some sort of fundamental threshold—like the uncanny valley—where the characters that inhabit our virtual environments become too lifelike.

    At its height, people spent a lot of time playing Second Life and it, well, looked like Second Life. We don’t even need hyperrealistic experiences for things to start getting scary, though. Imagine a version of Grand Theft Auto where every NPC has their own unique set of ambitions and relationships. I wouldn’t be surprised if someone could hack that together with the technology available today. Once that happens, we will need to start having some difficult conversations.
