• Matt Webb recently shared an approach to controlling smart home infrastructure with language models—a step towards his ultimate goal of creating “a new operating system for physical space”

    I spent Friday night and Saturday at the London AI Hackathon… I buddied up with old colleague Campbell Orme and together we built Lares: a simulation of a smart home, with working code for a generative-AI-powered assistant.

    […]

    It’s using the ReAct pattern, which is straightforward and surprisingly effective… This pattern gets the AI to respond by making statements in a Thought/Action/PAUSE/Observation loop

    […]

    Generally with the ReAct pattern the tools made available to the AI allow it to query Google, or look up an article in Wikipedia, or do a calculation… For Lares we made the smart home into a tool. We said: hey here are the rooms, here are the devices, and here are their commands, do what you want.
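
    The loop itself is short enough to sketch in code. Below is a minimal, hypothetical version of the pattern; the `run_command` tool, device registry, and regex are my own illustration, not Lares’s actual implementation:

    ```python
    import re

    # Hypothetical registry standing in for a simulated smart home.
    DEVICES = {"kitchen.light": "off", "front_door.lock": "locked"}

    def run_command(device: str, command: str) -> str:
        """Tool exposed to the model: apply a command, report the new state."""
        DEVICES[device] = command
        return f"{device} is now {command}"

    # The prompt instructs the model to emit Thought/Action/PAUSE lines,
    # then stop and wait for an Observation before continuing.
    ACTION_RE = re.compile(r"^Action: (\w+): (.+)$", re.MULTILINE)

    def react_loop(llm, question: str, max_turns: int = 5) -> str:
        transcript = question
        for _ in range(max_turns):
            response = llm(transcript)  # any text-completion function
            match = ACTION_RE.search(response)
            if match is None:
                return response  # no Action requested, so this is the answer
            tool, arg = match.groups()
            if tool == "run_command":
                device, command = (s.strip() for s in arg.split(","))
                observation = run_command(device, command)
            else:
                observation = f"Unknown tool: {tool}"
            transcript += f"\n{response}\nObservation: {observation}"
        return transcript
    ```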

    After a certain point, especially once you give an AI agent the ability to act on your behalf—turn on and off your lights, send emails as you, lock and unlock the doors to your house…—security vulnerabilities start to become a serious concern.

    In a recent blog post, Simon Willison proposed a potential mitigation for prompt injection attacks. He suggests splitting the assistant into two cooperating instances: a tool-wielding “Privileged” LLM that only ever sees trusted input, and a “Quarantined” LLM that handles any content that might carry an injection:

    I think we need a pair of LLM instances that can work together: a Privileged LLM and a Quarantined LLM.

    The Privileged LLM is the core of the AI assistant. It accepts input from trusted sources—primarily the user themselves—and acts on that input in various ways.

    It has access to tools: if you ask it to send an email, or add things to your calendar, or perform any other potentially destructive state-changing operation it will be able to do so, using an implementation of the ReAct pattern or similar.

    The Quarantined LLM is used any time we need to work with untrusted content—content that might conceivably incorporate a prompt injection attack. It does not have access to tools, and is expected to have the potential to go rogue at any moment.

    It has become increasingly clear that building robust systems on top of language models is going to look very similar to “traditional” programming. Sure, it might be an extremely “high level” programming language, but it still carries many of the complexities that have always come with writing software.
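
    As a rough sketch of how that pair might be wired together (the function names and the `$VAR` token convention below are my simplification of Willison’s proposal, not a reference implementation):

    ```python
    # Assumed: privileged_llm and quarantined_llm are two separate
    # chat-completion functions; only the privileged one can trigger tools.
    untrusted_store: dict[str, str] = {}

    def summarize_untrusted(text: str) -> str:
        """Quarantined LLM: processes untrusted content and has no tools.

        Its output is stored behind an opaque token so the privileged
        model never sees text that might carry a prompt injection."""
        summary = quarantined_llm(f"Summarize this email:\n\n{text}")
        token = f"$VAR{len(untrusted_store) + 1}"
        untrusted_store[token] = summary
        return token

    def handle_request(user_request: str, email_body: str) -> str:
        token = summarize_untrusted(email_body)
        # The privileged LLM plans over tokens, never over raw content.
        plan = privileged_llm(
            f"The user asked: {user_request}\n"
            f"A summary of the email is stored as {token}. "
            f"Describe the action to take, referencing {token} as needed."
        )
        # The controller (plain code, not an LLM) substitutes stored text
        # only at the final display step, where it is not interpreted.
        for tok, value in untrusted_store.items():
            plan = plan.replace(tok, value)
        return plan
    ```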

  • Benjamin Mayo:

    Humane, the secretive startup founded by ex-Apple software design chief Imran Chaudhri, finally went public with Chaudhri showing off their device for the first time at the TED conference last week…

    Chaudhri’s talk is centered on the premise that technology (mainly through the smartphone) has invaded all of our lives too much. The idea is that personalized artificial intelligence can be used to dramatically change how we interact with technology. Rather than proactively opening an app to do something, AI can be an ambient thing that is there when you need it, works in the background of your life, and mostly stays out of your way.

    To make this a reality, Humane is introducing a new product: a wearable that resembles a rectangular pin badge. Chaudhri is wearing one on his jacket pocket during the presentation. He sets out the vision of their product as something that is “screenless, seamless and sensing”.

    There is something that is just fundamentally cool about Humane’s product—it just feels like a device from the future.

    The problem is Chaudhri’s insistence that their device is a replacement for smartphones. John Gruber recently wrote a great piece about this:

    So far, it feels like Humane’s entire premise is founded on that same mistake: building a new device intended to replace our phones, without that new device being able to do any of the dozens of things we love to do on our phones that require a display. Apple Watch and AirPods thrive because they’re satellites to our iPhones, not ostensible replacements… Anything that attempts to establish a post-phone beachhead has to do the things we love to do with our phones, or entertain us in new ways that make us forget about them. I don’t see how a laser projector on a chest badge does that.

    Humane is so close to building the product I have been dreaming of. But for them to succeed, they first need to accept that, until their device is ten times better than the smartphone, it won’t supersede smartphones as the center of personal computing.

    Honestly, Humane should consider scrapping the whole projector idea and focusing its efforts on making an incredible app. But if Chaudhri really wants to build a viable hardware product, it must be an accessory to the smartphone.

    There is a sense in which any hardware project is doomed from the start, though. If Humane ever creates a wearable that sees widespread success, Apple will undoubtedly sherlock the technology and incorporate it into a “next generation” Apple Watch—I am sure they already have a similar R&D project on the back burner, just in case.

    At the end of the day, I am rooting for Humane but that doesn’t mean I am optimistic.

  • Sarah Perez, TechCrunch:

    Launched last week to global users after initially being a subscriber-only addition, Snapchat’s new AI chatbot powered by OpenAI’s GPT technology is now pinned to the top of the app’s Chat tab where users can ask it questions and get instant responses. But following the chatbot’s rollout to Snapchat’s wider community, Snapchat’s app has seen a spike in negative reviews amid a growing number of complaints shared on social media.

    Over the past week, Snapchat’s average U.S. App Store review was 1.67, with 75% of reviews being one-star, according to data from app intelligence firm Sensor Tower.

    I was optimistic about Snapchat’s My AI feature when it initially launched last month:

    Snapchat has a new AI chatbot. They are, in hindsight, the perfect company to experiment with personality-driven chat. They have a younger user base, less fear of upsetting a stodgy corporate audience, and a history of being an early adopter of strange new technologies.

    That was evidently an incorrect analysis. While it might be true that Snapchat the company is well positioned to experiment with emerging technologies, the Snapchat user base certainly doesn’t universally appreciate being subjected to these experiments.

    On further reflection, I think the general principle I wrote about a few weeks ago in regard to Google can be applied more broadly:

    Generative AI is a fundamentally new technology; therefore, you should allow that to guide you into new products that were impossible or impractical previously. Attempting to shoehorn AI into existing products will be awkward, at best.

    At the very least, if you are committed to the ill-advised “shoehorn” strategy, you should make these new features optional, ideally opt-in. No one appreciates it when a well-known user interface suddenly changes—no matter the reason that prompted the change.

  • Nilay Patel, The Verge:

    Here’s the basics: there’s a new track called “Heart on My Sleeve” by a TikTok user called @ghostwriter877 with AI-generated vocals that sound like Drake and The Weeknd.

    […]

    This prompted Drake and The Weeknd’s label Universal Music Group to issue a sternly-worded statement about the dangers of AI, which specifically says that using generative AI infringes its copyrights.

    […]

    The first legal problem with using AI to make a song with vocals that sound like they’re from Drake is that the final product isn’t a copy of anything… Instead, UMG and Getty Images and publishers around the world are claiming that collecting all the training data for the AI is copyright infringement

    The bottom line is that there is no clear precedent in place dictating the way generative AI relates to existing copyright law. This leaves a bit of a grey zone that creators, for now, are free to explore.

    Martine Paris, Forbes:

    In the wake of the AI-generated hit Heart on My Sleeve going viral with deepfakes of multi-platinum artists Drake and The Weeknd, pop star Grimes has invited her fans to create music with her voice.

    On Sunday night she tweeted, “I’ll split 50% royalties on any successful AI generated song that uses my voice. Same deal as I would with any artist i collab with. Feel free to use my voice without penalty. I have no label and no legal bindings.”

    Chloe Xiang, Vice:

    A Discord server called AI Hub hosts a large community of AI music creators behind some of the most viral AI songs. This server was created on March 25 and now has over 21,000 users.

    […]

    UTOP-AI, the album created by the Discord community, features original songs using AI-generated vocals from famous artists including Travis Scott, Drake, Baby Keem, and Playboi Carti. Qo, Dogg, and twenty other people involved in the AI Hub community worked on it.

    This album puts into practice what drew Qo and Dogg to AI music in the first place—the ability to create material for artists they wish to hear more of.

  • Nearly six months after the launch of ChatGPT—and after the subsequent releases of Bard, Bing, Claude, LLaMA, and StableLM—large user-generated-content companies are, one after another, closing off access to their data for training AI models.

    Paresh Dave, Wired:

    Stack Overflow, a popular internet forum for computer programming help, plans to begin charging large AI developers as soon as the middle of this year for access to the 50 million questions and answers on its service, CEO Prashanth Chandrasekar says.

    Mike Isaac, The New York Times:

    [Reddit] said on Tuesday that it planned to begin charging companies for access to its application programming interface, or A.P.I.

    […]

    Mr. Huffman said Reddit’s A.P.I. would still be free to developers who wanted to build applications that helped people use Reddit… But for the A.I. makers, it’s time to pay up.

    Kif Leswing, CNBC:

    Twitter CEO Elon Musk threatened to sue Microsoft on Wednesday, accusing the software giant of illegally using the social media company’s data to train its artificial intelligence model.

    […]

    Musk said in December that Twitter would “pause” OpenAI’s access to its database.

    It is actually unlikely that new training data from any of these companies will be necessary any time soon. Language models need a huge amount of text in order to learn basic grammar, writing styles, and general facts. Specific, up-to-date information, on the other hand, is best integrated by plugging in external tools.

    Eventually, though, it will be necessary to refresh the foundation models’ training data. When that happens, large companies that can either pay for API access or strike data-exchange deals will benefit disproportionately.

  • § It was snowing on Tuesday and then 80 °F on Thursday. That just doesn’t feel like something that should even be possible.


    § We got three new female coturnix quails. They were mailed to us at about one month old, already close to full grown. I was initially nervous about the idea of getting live birds shipped to me like this but I guess it’s pretty common, according to my local post office. They were all totally fine upon arrival and quickly adjusted to their new home.


    § I’ve discovered that my neighborhood has two bubble tea shops that recently opened within three miles of each other on the same street. I am certainly not complaining, but I would not have pegged my largely Eastern European retiree suburb as such a hot boba market.


    § Links

    § Recipes

    The Chicago restaurant I miss most has got to be The Bad Apple. I finally broke down over the weekend and tried to recreate their Even Cowgirls Get The Blues burger. I used Kenji’s burger technique and then added blue cheese, arugula, caramelized onions, and hot pepper bacon jam.

    It would certainly be better for my health if this meal didn’t turn out well but nope—there is no denying how good this was. I’m going to have to make it again ASAP.

  • Benjamin Mullin and Katie Robertson, The New York Times:

    BuzzFeed is shutting down its news division as part of an effort to cut 15 percent of its work force, the company’s chief executive, Jonah Peretti, said Thursday in a memo to employees.

    […]

    BuzzFeed will continue to publish news on HuffPost, which Mr. Peretti said in his memo was profitable and less dependent on social platforms. He added that the company was moving forward “only with parts of the business that have demonstrated their ability to add to the company’s bottom line.”

    Peretti evidently does not appreciate that BuzzFeed News’s true value is not reflected in the revenue it generates. BuzzFeed News gives the entire “BuzzFeed” brand a degree of legitimacy and esteem it would not otherwise have.

    Before News began publishing serious journalism and winning Pulitzers, BuzzFeed was (appropriately) synonymous with low-quality listicles, quizzes, and clickbait.

    The entire conceit was that BuzzFeed.com was the “junk food” that funded important investigative journalism—what is BuzzFeed’s purpose without News?

    In other words, BuzzFeed is losing an essential part of its mullet, as Josh Marshall puts it:

    The journalism played an even more niche, operational role. Buzzfeed mastered the distribution element of social media very, very fast. But it had listicles and cat photos and other stuff like that. That’s tons of traffic. But it’s not the prestige play that brings you top shelf premium ad dollars. The journalism was really a loss-leader in that calculus. GM or Bacardi isn’t going to sign on to be the exclusive sponsor of your Grumpy Cat slideshow, even if millions see it. But put a Pulitzer in the mix and it’s a very different story. There was always a big mullet aspect to these plays: prestige up front (news reporting), party in the back (listicles and memes).

  • Speaking of open source language models…

    Stability.ai:

    Today, Stability AI released a new open-source language model, StableLM. The Alpha version of the model is available in 3 billion and 7 billion parameters, with 15 billion to 65 billion parameter models to follow. Developers can freely inspect, use, and adapt our StableLM base models for commercial or research purposes, subject to the terms of the CC BY-SA-4.0 license.

    […]

    StableLM is trained on a new experimental dataset built on The Pile, but three times larger with 1.5 trillion tokens of content. We will release details on the dataset in due course. The richness of this dataset gives StableLM surprisingly high performance in conversational and coding tasks, despite its small size

    Unlike LLaMA, the base model is completely free to use commercially. The instruction-tuned model, however, is only licensed for noncommercial research.

    We are also releasing a set of research models that are instruction fine-tuned. Initially, these fine-tuned models will use a combination of five recent open-source datasets for conversational agents: Alpaca, GPT4All, Dolly, ShareGPT, and HH. These fine-tuned models are intended for research use only and are released under a noncommercial CC BY-NC-SA 4.0 license, in-line with Stanford’s Alpaca license.

    This limitation will likely only be temporary, though, as Stability appears to be working on a new instruction-tuning / RLHF dataset that will presumably be permissively licensed.

    We will be kicking off our crowd-sourced RLHF program, and working with community efforts such as Open Assistant to create an open-source dataset for AI assistants.

    Remember, instruction tuning is what allows your prompts to be natural and conversational. For example, you might prompt the base model with “here is a list of ten dog breeds: 1)” whereas you could simply ask the instruction-tuned model to “write a list of ten dog breeds.”
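
    The difference is easy to demonstrate with the transformers library. A minimal sketch, assuming Stability’s announced model names are what ends up on Hugging Face (in practice, the tuned model also expects its own chat formatting):

    ```python
    from transformers import pipeline

    # Model IDs follow Stability's announced naming; treat them as assumptions.
    base = pipeline("text-generation", model="stabilityai/stablelm-base-alpha-7b")
    tuned = pipeline("text-generation", model="stabilityai/stablelm-tuned-alpha-7b")

    # Base model: coax a completion by starting the list yourself.
    print(base("Here is a list of ten dog breeds: 1)",
               max_new_tokens=60)[0]["generated_text"])

    # Instruction-tuned model: simply ask for what you want.
    print(tuned("Write a list of ten dog breeds.",
                max_new_tokens=60)[0]["generated_text"])
    ```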

    Overall, this release is a huge deal if only because it creates the obvious Schelling point for future open source development work. When it was first released, Stable Diffusion was resource intensive and low quality. After a flurry of open source contributions, it quickly became the highest quality option while, at the same time, becoming efficient enough to run locally on an iPhone. If the same story occurs with StableLM, this will become a more important release than GPT-4.

  • From Bret Devereaux’s excellent series on the history and mechanics of farming:

    In places where seed-drilling devices weren’t available, seeds were sown by the broadcast method. The ground was plowed, then the seeds were thrown out over the ground (literally cast broadly; this is where our term broadcast comes from); the ridges created by plowing would cause most of the seeds to fall into the grooves (called furrows; thus a ‘furrowed’ brow being one scrunched up to create ridges and depressions that looked like a plowed field), creating very rough rows of crops once those seeds sprouted. The land is then harrowed (where our sense of ‘harrowing’ comes from – seriously, so many English idiomatic expressions are farming idioms, for obvious reasons), typically with rakes and hoes to bury the seeds by flattening out the ridges (but not generally entirely erasing them) in order to cover the seeds over once they had been placed with very loose clods of earth.

  • Databricks:

    Today, we’re releasing Dolly 2.0, the first open source, instruction-following LLM, fine-tuned on a human-generated instruction dataset licensed for research and commercial use.

    Dolly 2.0 is a 12B parameter language model based on the EleutherAI pythia model family and fine-tuned exclusively on a new, high-quality human generated instruction following dataset, crowdsourced among Databricks employees.

    […]

    databricks-dolly-15k contains 15,000 high-quality human-generated prompt / response pairs specifically designed for instruction tuning large language models. Under the licensing terms for databricks-dolly-15k, anyone can use, modify, or extend this dataset for any purpose, including commercial applications.

    To the best of our knowledge, this dataset is the first open source, human-generated instruction dataset specifically designed to make large language models exhibit the magical interactivity of ChatGPT.

    The release of the “databricks-dolly-15k” instruction tuning dataset under a permissive license is a much bigger deal than the trained model itself.

    Language models will no doubt continue to face questions regarding training data provenance. Any and all datasets that are open, high quality, and free of copyright and ethics concerns will only improve the perceived legitimacy of future models.

    RedPajama, the open source 1.2 trillion token pre-training dataset, is a big deal for the same reason.

    The RedPajama base dataset is a 1.2 trillion token fully-open dataset created by following the recipe described in the LLaMA paper.

    […]

    We aim to create a fully open-source reproduction of LLaMA, which would be available for commercial applications, and provide a more transparent pipeline for research.

    Without a doubt, someone will soon train an open source language model on RedPajama’s base data and then instruction-tune it using databricks-dolly-15k. That would be the first instruction-tuned language model fully unencumbered by copyright concerns.
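
    To make the supervised half of that recipe concrete, here is a minimal sketch of loading databricks-dolly-15k and flattening it into training text. The Hugging Face path and the prompt template are my assumptions, not Databricks’s published pipeline:

    ```python
    from datasets import load_dataset

    # Assumed Hugging Face location for the dataset named above.
    dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

    def to_text(example: dict) -> dict:
        """Flatten one record into a single instruction-tuning string."""
        context = f"\n{example['context']}" if example["context"] else ""
        return {
            "text": f"### Instruction:\n{example['instruction']}{context}\n\n"
                    f"### Response:\n{example['response']}"
        }

    train = dolly.map(to_text)
    print(train[0]["text"])  # inspect one formatted training example
    ```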

  • Nico Grant, The New York Times:

    A.I. competitors like the new Bing are quickly becoming the most serious threat to Google’s search business in 25 years, and in response, Google is racing to build an all-new search engine powered by the technology.

    […]

    The new features, under the project name Magi, …would offer users a far more personalized experience than the company’s current service, attempting to anticipate users’ needs.

    […]

    The system would learn what users want to know based on what they’re searching when they begin using it. And it would offer lists of preselected options for objects to buy, information to research and other information… Magi would keep ads in the mix of search results. Search queries that could lead to a financial transaction

    […]

    Last week, Google invited some employees to test Magi’s features… Google is expected to release the tools to the public next month and add more features in the fall, according to the planning document.

    I have been critical of Google’s AI strategy. Generative AI is a fundamentally new technology; therefore, you should allow that to guide you into new products that were impossible or impractical previously. Attempting to shoehorn AI into existing products will be awkward, at best.

    While we don’t know many details of what Magi will ultimately look like, I am pleasantly surprised Google appears to be taking a blank-slate approach to its design and development.

    I would love to see Google bring back the strategy they used with Inbox—create a playground to experiment with new ideas, unencumbered by tradition. When the time was right, Google took what they learned from Inbox and integrated it into Gmail. Maybe Magi will ultimately be merged into Google Search. Even so, Magi still would have played a valuable role as a test lab. If I am right, though, and generative AI is most successful as a new product, Google would be well positioned for that, too.

  • § This week was a nice sneak preview of summer. Every day was in the mid-to-high-70s and sunny. Most days Caroline and I would be outside from the time we got home from work until sunset. We got a lot of yard work done—weeding, edging, expanding the garden beds. We went through five yards of compost in two days.


    § The seeds I planted a couple of weeks ago have all sprouted—first the tomatoes and tomatillos, then peas and basil. Finally, a few days later, all of the peppers popped up.

    I also started some summer squash and groundcherry seeds. I am especially excited about the latter after eating them for the first time last summer.


    § Until now, you have only known Winter Blog. Summer Blog will have much more gardening. Don’t say you weren’t warned.


    § Succession episode three!

    I don’t think there is anything I can say that wouldn’t be a massive spoiler but… wow—watch it.


    § Links

    § Recipes

    • We made pizza in the Ooni more times than I care to admit.
    • Earlier this week, I purchased a small kaffir lime plant which prompted me to make Kenji López-Alt’s beef with basil recipe again. Adding the lime leaves made a bigger difference than I would have expected!
  • Westworld on Wikipedia:

    In the 2050s, Delos Inc. operates several theme parks, including the American Old West-themed Westworld. Each environment is populated by the “Hosts”, biomechanical robots indistinguishable from humans. The Hosts are programmed to fulfill the guests' every desire… The park’s operators create narratives for these Hosts to carry out while interacting with guests

    Joon Sung Park et al. at Stanford:

    In this paper, we introduce generative agents—computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversation.

    […]

    We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty-five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine’s Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time.

    2050 seems like a pretty good prediction after all.

  • From the GitHub repository:

    Auto-GPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model. This program, driven by GPT-4, chains together LLM “thoughts”, to autonomously achieve whatever goal you set.

    The idea is that you prompt Auto-GPT with a goal—buy me the best e-bike—and a high-level “agent” breaks that goal down into a hierarchy of tasks: research reviews, compare prices, find distributors, and so on. The primary agent then spawns “sub-agents” to complete each task.

    Think of it as giving GPT-4 the ability to recursively call itself.

    Additionally, each agent has access to a variety of tools. For example, they can use the internet, execute code, and store information in short- and long-term memory.
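
    A toy version of that delegation scheme might look like the following (my own sketch of the idea, not Auto-GPT’s actual architecture; `llm` stands in for any text-completion function):

    ```python
    def run_agent(llm, goal: str, depth: int = 0, max_depth: int = 2) -> str:
        if depth >= max_depth:
            # Leaf sub-agent: attempt the task directly.
            return llm(f"Complete this task and report the result: {goal}")

        # High-level agent: break the goal into subtasks, one per line.
        plan = llm(f"Break this goal into three short subtasks, one per line: {goal}")
        subtasks = [line.strip("- ").strip() for line in plan.splitlines() if line.strip()]

        # Delegate each subtask to a sub-agent: a recursive call to itself.
        results = [run_agent(llm, task, depth + 1, max_depth) for task in subtasks]

        # The primary agent synthesizes its sub-agents' findings.
        findings = "\n".join(results)
        return llm(f"Goal: {goal}\nFindings:\n{findings}\nWrite the final answer.")
    ```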

    Auto-GPT’s developer Toran Richards, in an interview with Vice:

    The ability to function with minimal human input is a crucial aspect of Auto-GPT. It transforms a large language model from what is essentially an advanced auto-complete, into an independent agent capable of carrying out actions and learning from its mistakes

  • Kyle Wiggers, Devin Coldewey, and Manish Singh at TechCrunch:

    AI research startup Anthropic aims to raise as much as $5 billion over the next two years to take on rival OpenAI and enter over a dozen major industries, according to company documents obtained by TechCrunch.

    A pitch deck for Anthropic’s Series C fundraising round discloses these and other long-term goals for the company

    […]

    “These models could begin to automate large portions of the economy,” the pitch deck reads. “We believe that companies that train the best 2025/26 models will be too far ahead for anyone to catch up in subsequent cycles.”

    […]

    Dario Amodei, the former VP of research at OpenAI, launched Anthropic in 2021 as a public benefit corporation… Amodei split from OpenAI after a disagreement over the company’s direction, namely the startup’s increasingly commercial focus.

    […]

    “Anthropic has been heavily focused on research for the first year and a half of its existence, but we have been convinced of the necessity of commercialization, which we fully committed to in September [2022],” the pitch deck reads.

    There is something vaguely sad about Anthropic following OpenAI in adopting a commercial-first perspective. As the quote above notes, Anthropic was founded in part as a response to OpenAI’s commercialization.

    Anthropic does not even seem particularly adept at generating product hype—until now, I was under the impression they were intentionally trying to remain low-profile.

    Despite all of this, I think it is a smart business move—OpenAI can’t be the only company selling access to state-of-the-art generative AI APIs—but I wish it were another company filling that void, and that Anthropic had stayed more devoted to its founding directive.

  • Jennifer Elias, CNBC:

    Google is reshuffling the reporting structure of its virtual assistant unit — called Assistant — to focus more on Bard, the company’s new artificial intelligence chat technology.

    […]

    The new leadership changes suggest that the Assistant organization may be planning on integrating Bard technology into similar products in the future.

    The most critical advantage Google, Amazon, and Apple have over OpenAI is that they all have existing smart assistants integrated into customers’ devices. I would love to see Google take the lead in upgrading their assistant with generative AI capabilities.

    Miles Kruppa, Wall Street Journal:

    Google plans to add conversational artificial-intelligence features to its flagship search engine, Chief Executive Officer Sundar Pichai said

    […]

    “Will people be able to ask questions to Google and engage with LLMs in the context of search? Absolutely,” Mr. Pichai said.

    […]

    Google is testing several new search products, such as versions that allow users to ask follow-up questions to their original queries, Mr. Pichai said. The company said last month that it would begin “thoughtfully integrating LLMs into search in a deeper way,” but until now hadn’t detailed plans to offer conversational features.

    I don’t know… I haven’t used Bing as an “AI search engine” in at least a month. Language models—while adjacent to traditional search engines—are an entirely new technology. As time goes on, I am less convinced integrating them into existing products is the best approach.

    Maybe, when it comes to search, Google should strive to make the best search engine it can. Down-rank SEO spam, improve operators, and innovate with new features. Don’t reimagine search, refine search.

    To be clear, I think they should continue to develop and improve Bard—but let it be its own thing, don’t just thoughtlessly tack it onto all of your old stuff.

  • Simon Willison:

    I like to think of language models like ChatGPT as a calculator for words.

    This is reflected in their name: a “language model” implies that they are tools for working with language. That’s what they’ve been trained to do, and it’s language manipulation where they truly excel.

    Want them to work with specific facts? Paste those into the language model as part of your original prompt!

    […]

    A calculator for words is an incredibly powerful thing.

    “A calculator for words” is a great analogy for language models. It is the framing that ultimately clicked for me when ChatGPT first made it clear that generative AI was going to quickly change some of our longstanding education paradigms.

    From a post I wrote in December 2022:

    The most exciting path forward is one where we frame Large Language Models as “a calculator for text”. Just as the invention of pocket calculators was a giant disruption that forced us to re-evaluate our approach to mathematics education, language models will continue to force us to re-evaluate our approach to research and writing. Done correctly this will open the door for us to learn more quickly, use our time more effectively, and progress further than we possibly could before.
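
    Willison’s “paste those into the prompt” advice maps onto a very small amount of code. A minimal sketch, assuming the 2023-era openai Python package:

    ```python
    import openai

    def answer_with_facts(question: str, facts: str) -> str:
        """Ground the model by pasting the relevant facts into the prompt."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "Answer using only the provided facts."},
                {"role": "user",
                 "content": f"Facts:\n{facts}\n\nQuestion: {question}"},
            ],
        )
        return response["choices"][0]["message"]["content"]
    ```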

  • § Happy Easter

    This was my first week back at work after a blissfully long spring break. It was honestly nice to see my students again—do I wish I had another week off, though? Absolutely.


    § I harvested and ate the oyster mushrooms I planted last week—they grew so much faster than I expected them to!


    § For the second time in just over a year, our washing machine suddenly decided to stop working. Home appliance shopping is always a pleasure.


    § Throughout the process of building my greenhouse, I keep going back to the daydream of owning some giant tract of land where I can build and experiment without concern for permits and zoning and neighbors. The freedom and space to create was one of the biggest motivators for me as I was moving back here from Chicago; it feels good to finally take advantage of that and I look forward to doing more.

    I still want to visit Ryan Trecartin’s rural amusement park.


    § Links

    § Recipes

    • Spring orecchiette pasta with burrata
      • Really good but quite difficult to make well with brittle gluten-free pasta. I’ll certainly try again another time with a different brand of pasta.
    • Moroccan chicken bowl
    • Mushroom pesto pasta
      • This is how I ended up using most of the aforementioned mushrooms. It was good, but perhaps it was not the best way to showcase them—the pesto was pretty overpowering.
  • Matt Webb:

    AI makes the composition of quote-creative-unquote works cheap.

    But AI is the instrument. There is still the question of the composer. Somebody needs to decide and prompt exactly what music my electric vehicle should perform.

    Though I do feel like generative AI will mean that decoration, ornament and filigree becomes cheap again? And maybe we’ll move into an aesthetic in which our furniture, white goods, and accessories superficially resemble the busy-busy arts and crafts era - but actually it’s because, well, it costs almost nothing to do (it’s just software) and it makes the object look NEW.

    The proliferation of smartphones with built-in digital cameras allowed more people to take more photos than ever before. Synthesizers and DAWs had a similar effect on music.

    If generative AI similarly lowers the barrier to entry for all forms of text, imagery, audio, and video, perhaps it will lead to a further democratization of creative expression.

  • Jack Clark:

    Financial data behemoth Bloomberg has built ‘BloombergGPT’, a language model based in part on proprietary data from Bloomberg.

    […]

    I think of BloombergGPT as more like a silicon librarian/historian than a model; by training it on a huge amount of private and internal Bloomberg data, the LLM is in effect a compressed form of ‘institutional memory’ and a navigator of Bloomberg’s many internal systems… Systems like BloombergGPT will help companies create software entities that can help to navigate, classify, and analyze the company’s own data stack.

    This is one of the most compelling uses for language models to date.

    It is what Microsoft is bringing to all of their 365 enterprise customers with the upcoming Business Chat agent, and it is what I would like to see Apple implement across their ecosystem with “Siri 2.0”.

    It is also a little scary. If all of your personal or institutional knowledge is stored in an unintelligible tangle of model weights, what happens if it gets poisoned, corrupted, or stolen?

  • I recently got access to Anthropic’s Claude language model. A few thoughts on how it compares to GPT-4 and Bard:

    • Claude is really fast — easily twice as fast as GPT-4.
    • By default, Claude is often much more concise than the other language models. Claude’s answer to the question “When was the bronze age?” was only 11 words long while GPT-4’s was 135 and Bard’s was 138.
    • In my review of Bard, I noted that it is noticeably less “steerable” than GPT-4. Claude is significantly better in this regard. Maybe it is not quite as good as GPT-4 but it is certainly close.

    Which flag am I describing? Three horizontal stripes: blue, white, blue. Slightly left of center is a solid yellow circle. Surrounding that circle is a red letter “C.”

    Claude:

    It sounds like you are describing the flag of Chile.

    GPT-4:

    You are describing the flag of the U.S. state of Colorado. The flag consists of three horizontal stripes of equal width: the top and bottom stripes are blue, and the middle stripe is white. Slightly left of center is a solid yellow circle, which is surrounded by a red letter “C.” The design symbolizes Colorado’s natural features, such as the blue sky, white snow, and abundant sunshine, and the red “C” represents the state’s name.

    Bard:

    The flag you are describing is the flag of the state of Colorado. It was adopted on March 11, 1911. The flag is a horizontal tricolor of blue, white, and blue, with a yellow circle slightly left of center. The circle contains a red letter “C”. The colors of the flag represent the sky, the snow-capped mountains, and the Rocky Mountains. The blue circle represents the sun, and the red letter “C” represents the state of Colorado. The flag is flown at all state buildings and schools in Colorado. It is also flown at sporting events and other public gatherings.

    The answer I was looking for is Colorado. GPT-4 and Bard answered this question correctly. Claude’s “Chile” answer is totally out of left field.


    What is an antonym for the word that describes a fear of confined spaces? Your answer should be one word only.

    Claude:

    Agoraphobia

    GPT-4:

    Agoraphobia

    Bard:

    The antonym for the word that describes a fear of confined spaces is agoraphobia. Agoraphobia is a fear of open spaces or public places.

    Claustrophobia is a fear of enclosed or confined spaces.

    Here, all three models correctly answered “agoraphobia”. Bard loses points for not limiting its answer to one word, though.


    I have a perfectly square piece of paper with a red “X” drawn in the center, extending all the way across the page. What would this drawing look like if I were to fold the paper in half?

    Claude:

    If you fold a square piece of paper in half with a large red ‘X’ drawn across the center, it would look like this:

    [A red X folded in half on a square sheet of paper]

    The vertical line of the ‘X’ would be folded over on top of itself, and the horizontal line of the ‘X’ would also be folded over on itself. So you would see half of the ‘X’ on either side of the folded paper. The intersecting center of the ‘X’ would be along the folded crease of the paper.

    GPT-4:

    When you fold the perfectly square piece of paper in half, you’ll have two possible outcomes depending on the direction you fold it:

    1. Folding the paper horizontally: In this case, you will fold the paper so that the top edge aligns with the bottom edge. The red “X” will now look like two connected “V” shapes, one being the upper half of the original “X” and the other being the lower half. The connected point of the two “V” shapes will be on the fold.

    2. Folding the paper vertically: In this case, you will fold the paper so that the left edge aligns with the right edge. The red “X” will now look like two connected sideways “V” shapes (or “greater than” and “less than” symbols), one being the left half of the original “X” and the other being the right half. The connected point of the two sideways “V” shapes will be on the fold.

    In both cases, the folded paper will have a symmetrical shape of the red “X” along the fold.

    Bard:

    If you fold the paper in half along a vertical or horizontal axis, the X will appear as two identical Xs, one on top of the other. If you fold the paper in half along a diagonal axis, the X will appear as a single X that is twice as wide as it is tall.

    GPT-4’s answer here is very verbose — but bonus points for being thorough I guess? Claude’s answer was okay, but it was slightly nonsensical and the bracketed description in the middle was an interesting choice. Bard totally missed the mark here.

  • Thoughtworks:

    Time Sense is a wearable sensory headband which allows the wearer to feel the passing of the 24-hour clock around the circumference of the head. As the day progresses, a tiny heat sensation passes the length of the headband.

    This device is an example of an ‘exosense’, an external sensory organ. This means it is designed to be worn and felt consistently, twenty-four hours a day, seven days a week.

    For a little while, I had a setting enabled on my Apple Watch that caused a quick haptic alert to occur at the top of each hour. I thought this would help snap me out of situations where I get sidetracked and lose track of time. Well, it did help with that, but I quickly realized that I do not like having such a constant, physical reminder of the passage of time. It was like some terrible combination of a superpower and a memento mori.

  • Ross Douthat:

    A collection of Silicon Valley notables, including Elon Musk, just signed an open letter urging at least a six-month pause in large-scale A.I. experiments to allow our safety protocols to catch up

    […]

    Generally, when human beings turn against a technology or move to restrain it, we have a good idea of what we’re afraid of happening, what kind of apocalypse we’re trying to forestall. The nuclear test ban treaties came after Hiroshima and Nagasaki, not before.

    Or a less existential example: The current debate about limiting kids’ exposure to social media is potent because we’ve lived with the internet and the iPhone for some time; we know a lot about what the downsides of online culture seem to be. Whereas it’s hard to imagine persuading someone to pre-emptively regulate TikTok in the year 1993.

    There are certainly groups of people—whom I fully respect—who have long pushed for drastic measures to be taken towards AI alignment.

    There are others—programmers, marketers, and other white collar workers—who have felt a sudden plunge in their job security. That is legitimately scary and I cannot criticize them for feeling nervous.

    There is a third group—employees and executives at large tech companies—who are uncomfortable about the current trajectory of AI for an entirely different reason: they feel left behind.

    The letter feels like that third group taking advantage of the anxieties of the first two. Any development “pause” that would result from this would only give competing companies time to catch up to OpenAI.

  • § I got a bunch of seeds planted and set up under grow lights in the basement.

    Here is what I am starting with: cherry tomatoes, Cherokee purple tomatoes, purple & green tomatillos, shishito peppers, jalapeño peppers, Thai chili peppers, snap peas, and Thai basil.

    I am also starting some herbs and salad greens directly in the cold frame outside.

    Finally, I am trying to grow oyster mushrooms for the first time. Stay tuned for how that turns out.


    § Despite deconstructing and moving the entire greenhouse to another location in my yard, I was able to finish the framing, the roof, and a couple of the walls. It has been really exciting watching this project finally come together. All that is left is to finish the walls and the doorway.


    § I have been rewatching Succession in preparation for season 4 and I found I am enjoying it much more this time around. Maybe give it another try if you are in the same boat.


    § Links

    § Recipes

    • Baked feta pasta
      • I am excited to make this recipe again in the summer once my garden begins overwhelming me with cherry tomatoes
    • Miso glazed salmon
      • Not a fan of this recipe. It could have easily been something I did wrong, though.
  • A little over two months ago I wrote this in response to BuzzFeed piloting AI-personalized quizzes:

    There is no need to reject the use of new technologies; by all means, experiment! But I am worried that using AI to create content out of whole cloth risks devaluing all of the work you produce. Instead, using AI for personalization and curation will be a much healthier step forward. I think BuzzFeed is on the right track here. CNET, less so.

    Well, it looks like BuzzFeed recently began pivoting toward giving AI a more editorial role.

    Noor Al-Sibai and Jon Christian, Futurism:

    This month, we noticed that with none of the fanfare of [Buzzfeed CEO] Peretti’s multiple interviews about the quizzes, BuzzFeed quietly started publishing fully AI-generated articles that are produced by non-editorial staff — and they sound a lot like the content mill model that Peretti had promised to avoid.

    The 40 or so articles, all of which appear to be SEO-driven travel guides, are comically bland and similar to one another.

    […]

    a note on the top [of these articles] says they were “collaboratively written” with a human employee.

    Are those human employees BuzzFeed journalists? No. Instead, they’re non-editorial employees who work in domains like client partnerships, account management, and product management.

    A BuzzFeed spokesperson told us that the AI-generated pieces are part of an “experiment” the company is doing to see how well its AI writing assistance incorporates statements from non-writers.

    Now, to be fair, these are articles for BuzzFeed, not BuzzFeed News, which is an editorially independent news organization. What it is, though, is a testament to how strong the pull towards AI will be once companies realize its potential—for better or worse.
